r/SneerClub archives
Elite Universities Gave Us Effective Altruism, the Dumbest Idea of the Century (https://jacobin.com/2023/01/effective-altruism-longtermism-nick-bostrom-racism)

Counterpoint: it’s not the dumbest idea of the century, only because there is some very serious competition for that title

It’s hard to make a judgement because they all orbit around each other.
Plenty of candidate ideas from different worlds... though I'm not sure which really come from which century: "prosperity gospel," "multi-level marketing,"...
Qanon

I liked the article but I have one nitpick.
The effective altruists always were futurists.
She’s so close in her own article and experience to EA that she probably knows this.
And still

“effective altruism and offshoots like longtermism”
Technically longtermism came later, but that’s just a different name for their brand of futurism, which came first.
It’s such a tiny detail but man it gets on my nerves. I see no reporting that the futurists made EA.

I heard you liked to be effective in your altruism. So I put some effectiveness in your Effective Altruism so you can be effective while you’re being Effectively Altruistic.

[removed]

I like EA, but the main economic principle they use is value maximization, a model that is not really used by firms at all in their financial decision-making processes, and was mostly created for the purposes of arguing against social welfare. I agree with a lot of EA principles honestly, some of which you mentioned, but the economic elements of their tools come off a little like one guy took a microeconomics 101 course, got a little Malthus, and then decided to invent a philosophical tradition around those ideas.

The economic school of thought that exists today that’s most closely aligned with EA’s ideas of economics does not have anything to do with value or utility, and is instead based mostly on a model by William Niskanen called budget maximization. The end state of the rational choices embedded in that model is a near-infinite increase in budget and nearly nonexistent social efficiency. The guy who invented it essentially designed Reagan’s approach to economic policy and founded the Cato Institute, a libertarian think tank that advocates for reducing Social Security, Medicaid, and basically every social program that exists in the US. It’s all a very far cry from Peter Singer’s initial goals for EA.
> I like EA, but the main economic principle they use is value maximization

This is not wrong: most EAs think you should do whatever maximizes expected value (or whatever they *think* the expected value is) and don't have any notion of when that makes sense and when it doesn't. But in principle, I interpret EA as being about prioritising things based on evidence and logic, and I agree with it completely.
Yep, this is more or less exactly how I feel. Expected value is something incredibly old and specific in the political economy literature, and it hasn’t been used as a shorthand for anything since the field of macroeconomics started in the 1930s. Assuming away the implicit assumptions embedded in the model in favor of the shorthand applied form is not something working economists do, even if it is something academic economists do in their non-peer-reviewed working papers. It’s weird to see philosophers taking up the project and assuming away the situations in which it doesn’t make sense to apply it; my suspicion is that most of them are simply unaware that it’s not an end in itself.

All of that being said, my critique is with the tools they use to put their goals into practice, and the extent to which those tools are capable of producing those outcomes. The ideas are solid, and I believe at a base level nobody would really disagree with the top-of-the-iceberg principles; my issue is that it’s a really slippery slope between expected value, the Niskanen model, and bad-faith arguments from an HBD position.
Eh? Expected value is a mathematical concept, comes up in probability all the time. The expected value of a random event is the sum of the possible values weighted by the chance of each occurring. It's the first moment of a distribution, for a more technical name. Most first year statistics or probability classes will teach it, it's not some never-referred-to concept from dusty old papers.
Yeah anyone who has played some poker should be familiar with the concept.
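As a quick illustration of the textbook definition above (the bet and its numbers are invented for the example, not taken from any comment here):

```python
# Expected value: each outcome's payoff weighted by its probability, summed.
# A toy poker-style bet -- purely illustrative numbers.
outcomes = [(-10, 0.8), (40, 0.2)]  # (payoff, probability)

ev = sum(payoff * prob for payoff, prob in outcomes)
print(ev)  # -10*0.8 + 40*0.2 = 0.0, i.e. a break-even bet
```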
Yeah, but why expected value (the mean) and not... the median? Or the mode? Or the 95th percentile? Or any other statistic?
Expectation does have some properties those don't, like capturing high-effect, low-probability outcomes. And no, I don't just mean AI stuff. Things like "Trump gets elected" as viewed by a person in November 2015.
Yes, but EAs only ever focus on the mean. They rarely talk about the rest, boiling everything down to one number even when other statistics, like the median, are more suitable.
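A minimal sketch of the disagreement here, with made-up numbers: the mean gets dragged by a rare, huge outcome that the median never sees.

```python
import statistics

# 999 futures where nothing happens, plus one low-probability,
# high-effect outcome (all numbers invented for illustration).
samples = [0] * 999 + [1_000_000]

print(statistics.mean(samples))    # 1000 -- dominated by the one rare outcome
print(statistics.median(samples))  # 0 -- ignores the tail entirely
```

Which of the two is the "more suitable" summary is exactly what's in dispute above.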
I think we’re in agreement. Importantly, I’m referring to its use by firms in the decision-making process, as well as its use by economists in the field, outside the educational system! It’s why I described it as more or less a 101-level tool, on which you seem to agree.
I agree. I didn’t know that EA got blended with this new “longtermism.” And even that has become the malformed extreme of what was originally conceptualized. Most people would think 50-100 years (max) is plenty of long-term thinking. I think more in the 20-30 year range. This is the point at which we have to have the next-gen solutions/approaches for next-gen problems, or you’re playing from behind. We’re currently dealing with an extreme short-termism strategy on many things: here and now only, finding fixes only once twenty problems have dogpiled onto one another, to where no first-principles approach can then ensue.

Most of us would laugh at this billion-year long-term mark. For me, this is exactly why EA is great: incremental, cost-efficient gains. And while some push for the long view, we can keep other potential approaches in our diversified portfolio. Technology will change so drastically in just the next couple of decades that the new problems, and the old solutions, will be potentially unknowable, especially with a singularity on the horizon. Looking beyond that 30-year (100-year, for sure) mark is a bit satirical, even. You might get a handful of predictions right out of thousands that are merely fashionable, and you won’t likely even invest in the current individual problem, let alone have the right direction mapped.
I think the rate of discount for future things is pretty dependent on what is being talked about. Eg when dealing with nuclear waste that takes centuries to decay, you should think about those centuries. In EA, longtermism is more of a license to *discount* the present and to engage in Pascal's Wager style sophistry.
Yea, I agree generally with the first part; it’s weighing your priorities. The second part... the thing is, the present is always progressively influenced by the future. So if we keep moving toward it, we lessen the bundles of issues, especially if we hit more root causes. That reverses the modern perception (“everything’s going to hell”) fed by the media’s window onto negative stimulus and fear, and instead we can continue on the upward spiral we’ve seen over the arc of history. Sure, there are dark ages, but it would appear progress is an unassailable force if we avoid the major catastrophes and allow people and markets to be generally free to roam. This has plenty of side effects of its own, hence why we adjust with the invisible hand.

So I think climate change would be a good example. Maybe its effects are increasing now, but we’ll really pay in 50-100 years when the world is reshaped completely by drastically altered weather and climates. So we work on it now. Many of the longtermists are looking past that and not acting in the present, and this is where my consternation with their line of thought lies: thinking of colonies on Mars and the precious minerals we’re going to mine off asteroids in the future. They can fund those ventures in their own right (philanthropically, as a private hobby). That’s fine. It helps develop science further on the free market and increases demand for more STEM jobs. Great. But governments mostly have to work on more manageable scales. We can’t fail to build infrastructure now just because we want people to have a good life even a couple hundred years out; we’ll have teched ourselves out of our current problems by then anyway. The private market can go in whatever direction it wants, but we have to use government to fund the imbalances and blind spots of the market’s natural pull and deceptions (imho).
Longtermists are complete hacks. They don't even get the concept of something as basic as a discount rate.
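For what it's worth, the concept is simple enough to sketch. The standard exponential form is below, with an illustrative 3% annual rate (not a figure from any comment here):

```python
# Exponential discounting: a benefit t years out, at annual discount
# rate r, is worth benefit / (1 + r)**t today.
def present_value(benefit: float, rate: float, years: int) -> float:
    return benefit / (1 + rate) ** years

for years in (10, 100, 1000):
    print(years, round(present_value(1_000_000, 0.03, years), 2))
# 10   -> ~744,094
# 100  -> ~52,033
# 1000 -> ~0.0 (about 1.5e-7; a billion-year horizon rounds to nothing)
```

Under any positive rate, far-future benefits shrink toward zero, which is why skipping the discount rate matters so much for billion-year arguments.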
Whether EA is dumb or smart depends on what you mean by "EA." If you just mean "assessing the short and long term effects of specific kinds of interventions, predicting what the long term impact and effectiveness will be, and using that when choosing between multiple potential ways to spend time and money," it's smart. Lots of people do it already, though not necessarily in the same way. Looking up how much money a nonprofit spends on marketing vs. its primary mission before donating to them is pretty common, and is one way of approaching "lowercase-e, lowercase-a" effective altruism.

But *Effective Altruism* is also a specific cluster of social circles, influential bloggers, wealthy donors, and funding organizations that share a very particular idea of how one should assess "effectiveness" and rank "impact." That specific group has fixed its gaze on things like "Invent sentient machines before anyone else, so they'll like us and not destroy humanity" as *the most important and impactful altruism possible,* via their idiosyncratic perspectives.

Their rich funders have done *considerable harm* to other people, at scale, by applying suspiciously self-serving ethical rules like "Making extremely large amounts of money by breaking laws and cheating my investors is morally good because I will donate to causes that I believe will benefit ninety-trillion humans in the year 10,000." Even worse, this specific organization/social cluster tends to insist that these idiosyncratic priorities are not simply *their priorities,* but *the objectively most important priorities, period,* because of their perfectly objective and rational means of determining what is 'effective.'

So… effective altruism, good! Effective Altruism, *dumb.*
Long story short: the capital letters are relevant.
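The lowercase version described above is easy enough to sketch; every charity name and figure below is entirely hypothetical:

```python
# Rank charities by how much of total spending reaches the primary
# mission (vs. marketing/overhead). All names and numbers are made up.
charities = {
    "Charity A": {"program": 9_000_000, "total": 10_000_000},
    "Charity B": {"program": 4_000_000, "total": 10_000_000},
}

ranked = sorted(charities.items(),
                key=lambda kv: kv[1]["program"] / kv[1]["total"],
                reverse=True)
for name, spend in ranked:
    print(f"{name}: {spend['program'] / spend['total']:.0%} to the mission")
# Charity A: 90% to the mission
# Charity B: 40% to the mission
```

Compare, rank, donate: no position on sentient machines in the year 10,000 required.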
Wouldn't dare. Wouldn't dream of [sneering at charity under capitalism in general.](https://en.wikipedia.org/wiki/The_Soul_of_Man_Under_Socialism)
It's not dumb, it's regressive taxation and managerialism in disguise. Effective altruism states that we owe future people just as much as we owe people today; it could've come from the Gilded Age. Where managerialism fits in is that this allows the elite to imagine what decisions they can make today that will determine tomorrow. In his day, Carnegie used philanthropy to promote race science. Charity, or private wealth for the public good, is bad in more salient and obvious ways when you consider the self-interest biases of the wealthy elite.
…you are a sympathizer? It is so stupid, as if they’re the first people to talk about economics in charity.
"Your manuscript is good and original, but what is original is not good; what is good is not original"

I don’t think it’s the dumbest idea of the century, but even if it were, we still have plenty of century left to go, and we will unfortunately have more computer scientists walking around as well. A recipe for disaster, I’m afraid.