r/SneerClub archives

An interesting article that discusses EA, looking at its connections to rationalism, longtermism and MIRI etc, and ponders whether treating “moral good” as something that can be quantified and optimized is truly the best approach.

TBH, I think any organization with limited resources (which is all of them) is going to have to do *some* kind of quantification about where to allocate resources. The question of what *is* good is usually a lot more interesting and revelatory than the simple quantification steps, though.
Identifying a metric should be immediate cause to look *very suspiciously* at that metric, and also spend far more time looking outside that metric.
Of course, but that applies to *every* science that uses metrics. Metrics are still useful for studying things in the same area, like comparing the efficacy of two different malaria interventions. The problems come when you try to compare a malaria treatment to a 0.000001% chance of averting an apocalypse in 40 years.
Maybe choose several metrics, then optimize all of them simultaneously with some mixing function that penalizes very low scores on one?
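One sketch of such a mixing function (the metric values below are made up for illustration): a generalized power mean with a negative exponent. As the exponent gets more negative it approaches `min()`, so any single very low score drags the whole aggregate down, which is exactly the penalty being described.

```python
def power_mean(scores, p=-4):
    """Generalized (power) mean; a strongly negative p punishes low outliers."""
    n = len(scores)
    return (sum(s ** p for s in scores) / n) ** (1 / p)

# Hypothetical normalized scores on three metrics:
balanced = power_mean([0.7, 0.7, 0.7])    # decent across the board
lopsided = power_mean([0.99, 0.99, 0.2])  # great on two, poor on one

# The lopsided option loses despite its higher arithmetic mean (~0.73 vs 0.7),
# because the 0.2 score dominates the negative-exponent mean.
```

This is just one choice of mixing function; a log-sum or a hard `min()` would have a similar flavor.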
Maybe think harder about why you feel such pressure to put incommensurable numbers on things
"The morgans fear what cannot be purchased, for a trader cannot comprehend a thing that is priceless."
It’s easier to agree on and coordinate around an explicit resource-allocation system than an implicit one where people just do whatever they feel like is right in the moment

Sad to see some of the discussions about these things end up basically being ‘what if the homeless person buys beer with your donation’ with a lot of extra words. (Somebody in the comments made a remark like this)

Then they've contributed to the local economy, enjoyed a refreshing beverage, and by interacting with a member of the store's staff while being visibly homeless they have also provided a valuable distraction for any shoplifters.
I mean...that seems flippant. Giving an addict the thing they're addicted to often does not help them.
I was thinking more like this is a 'basic slightly drunk conservative uncle' talking point, but with more words. But at least they end up actually giving to charity, and don't use it as an excuse for inaction. And even if they aren't an addict, so what if they drink a beer with it?
Nothing wrong with giving an alcoholic a beer, they may die if you don’t!
interesting that you assume this hypothetical homeless person is an addict
I'm not assuming that they are, no, but certainly many are.
Homelessness statistics show that the largest growing group of homeless people are entire families. The top causes and factors of homelessness are entirely economic, the reasons usually being the loss of jobs, loss of income in a two income family, relationship issues like domestic violence and divorce, lack of affordable housing, etc. A relatively small fraction of homelessness is actually caused by addiction, and even then, substance use can often be self-medication.
In the US, mental health has been the largest factor of homelessness ever since deinstitutionalization under Reagan. But you're right, under the current string of economic crises, the fastest growing group of homeless people are simply economically excluded from having a roof over their head.
i mean, we somehow went from 'giving a homeless person money, which they use to buy beer' to 'giving an addict the thing that they're addicted to'. you do see the difference between these two characterisations of the situation, right?
Yes, but when the concern is buying a beer - i.e. alcohol - that is usually what is being implied.
If we're talking about what tends to be implied when this talking point is used, *usually* it's about characterising homeless people as irresponsible spenders and looking for excuses not to help them. Very few people who bring this up actually care about the *wellbeing* of the homeless person, in the sense of wanting to help them fight addiction. If that's what you care about, then you should probably support direct cash transfers to homeless people!
Or maybe people can decide how to spend their own money 🤷‍♂️🤷‍♂️🤷‍♂️
This drives me nuts because, for alcoholics at least, that alcohol is likely keeping them alive and preventing them from having potentially fatal seizures. And the paternalism, that's even worse.
Like, there is a point that these situations are often a lot more complicated and you should really listen to medical personnel rather than making assumptions about people whose circumstances you don't know, but the "no, alcohol is just good for you akshually" thing the internet has going on is... not great. And the threshold for being an alcoholic is *far* lower than the point where you start suffering seizures from withdrawal.
There is no “threshold” for being an alcoholic, rather there’s a small galaxy of intersecting dependencies and impacts on one’s life which produce the label “alcoholic” across a wide range of rates of consumption and degrees of physical dependence. In addition, due to how physical dependence works, especially over time, seizures set in very differently in different circumstances - the phenomenon called “kindling” is a real bitch here. If somebody on the street wants a beer I don’t think I or any doctor is going to spend that much time considering all the relevant diagnostic criteria - which are many - first; if you wind up in the hospital with withdrawal-induced illness, whoever sees you will eventually tell you (unless you’re going to do medical detox) not to stop drinking but to keep it down to what you think is your safe lower limit, not prescribe you an exact amount.
> but the "no, alcohol is just good for you akshually" thing the internet has going on is... not great.

I never said this; I don't even drink myself because of it. I was speaking to the pearl-clutching response some people have about homeless people choosing what they want to spend their money on, and how that response can fly in the face of the common rhetoric behind it, usually the claim that they don't want to contribute to harm should they choose to buy alcohol.

> And the threshold for being an alcoholic is far lower than the point where you start suffering seizures from withdrawal.

If you're at the point where you will seize without alcohol, which is what I was talking about, you are an alcoholic, or a very under-treated epileptic.

I agree with the overall sentiment in the essay but it just doesn’t sneer. Engels said it better more than 100 years ago:

Philanthropic institutions forsooth! As though you rendered the proletarians a service in first sucking out their very life-blood and then practicing your self-complacent, Pharisaic philanthropy upon them, placing yourselves before the world as mighty benefactors of humanity when you give back to the plundered victims the hundredth part of what belongs to them!

Oscar Wilde has some good lines, too:

> It is immoral to use private property in order to alleviate the horrible evils that result from the institution of private property.

> the best amongst the poor are never grateful. They are ungrateful, discontented, disobedient, and rebellious. They are quite right to be so. Charity they feel to be a ridiculously inadequate mode of partial restitution, or a sentimental dole, usually accompanied by some impertinent attempt on the part of the sentimentalist to tyrannise over their private lives. Why should they be grateful for the crumbs that fall from the rich man’s table? They should be seated at the board, and are beginning to know it.

Besides the moral hazards of advocating these positions, these ideologies provide an overly simplistic formula for doing good: 1) define “good” as a measurable metric, 2) find the most effective means of impacting that metric, and 3) pour capital into scaling those means up.

This bit stood out to me. It never occurred to me before this that longtermists (and EAs more generally) are Goodharting the very concept of “goodness”. How ironic.

Goodhart is a perfect reference here. Unlike some, I’m loath to criticise EAers for *attempting* to reach a broad quantification, to the best of their ability, of what the most good there is to be done given the usual constraints. Some people seem to think this is an affront to moral thinking in and of itself, which strikes me as very silly, even if the EA or utilitarian projects are in the wider scheme of things misguided along these lines. However, as Goodhart noted, when this attempt becomes an accounting exercise to the exclusion of the genuine moral thinking which motivated it (you have limited resources to think with too! Don’t spread them too thin on doing lots of sums!), the exercise inevitably goes off the rails.

this is literally my first encounter with the phenomenon of longtermism, in fact i was just routed here from a conversation on another board.

wall of rant incoming, sorry if this is all well-established :/

i don’t think the basic rubric is necessarily unworkable, but oh my god the *math* is so conspicuously terrible it’s just impossible to believe it could result from honest ignorance.

who trucks in quadrillions of years without even contemplating a discount rate for future utility vs. present utility? i mean, save one person now or a dozen people in a mere million years. exactly how little compounding is necessary for the former to exceed the latter?
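The answer to that rhetorical question: almost no compounding at all. A quick sanity check (treating "lives" as the unit of utility, purely for illustration) shows how small an annual discount rate makes one person now outweigh a dozen people a million years out.

```python
# Find the annual discount rate r at which 12 lives in 1,000,000 years
# are worth exactly 1 life today: solve (1 + r)**years = 12 for r.
years = 1_000_000
future_lives = 12

r = future_lives ** (1 / years) - 1
# r comes out to roughly 2.5e-6, i.e. about 0.00025% per year.
# Any discount rate above that vanishingly small threshold means
# saving one person now beats saving twelve in a million years.
```

The point being made upthread: over quadrillion-year horizons, any nonzero discounting whatsoever annihilates the far-future terms.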

who can with a straight face advance an argument like “if we can just make it past the next hundred years [then suddenly a miracle will occur and all the existential threats will be behind us]”? i mean, what’s the chance one of our existing crises will snowball into an extinction event within 100 years? idk, 1% (jesus, i *wish*)? ok, so what’s the chance of earth being habitable to humanity in 10000 years? 99%, you schmuck? maybe 36% would seem more reasonable to an actual adult? but oh yeah, if we make it past Putin, outliving the sun is a cinch, ASSHOLES.
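The 36% figure isn't pulled from thin air: assuming an independent 1% extinction risk per century, surviving 10,000 years means surviving 100 such centuries in a row.

```python
# If each century carries an independent 1% chance of extinction,
# the probability of humanity surviving 100 centuries (10,000 years)
# is 0.99 ** 100, not anywhere near 99%.
per_century_survival = 0.99
centuries = 100

p_survive = per_century_survival ** centuries
# p_survive is approximately 0.366, i.e. about 36%.
```

The independence assumption is of course itself a simplification, but it shows why "99% habitable in 10,000 years" is incompatible with "1% risk per century".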

who has the unmitigated hubris to assume that neither the law of unintended consequences NOR propagation of uncertainty pertain to their plans over events of such scale and duration?

tl;dr: it’s as shoddy as the “wE’re In A sImuLaTioN” sewage. and you got Thiel in on it, so you KNOW it’s dishonest in the most poisonous self-serving way.

You don’t contemplate a discount rate because in a space-time block universe, utility doesn’t change just because it’s at a different point on the map
someone smart may have originated that for a smart reason, but it's, pardon me, balderdash. things compound, and real things typically compound faster than money. a discount rate is nothing but acknowledging compounding.
In this case utility is, by definition, a fixed quantity everywhere at all times, you’re talking about exchange value
I thought the point of discounting was to account for the uncertainty about future predictions increasing with time? If it isn't, where do they account for that?
That might be one reason for them to include a discount rate, but it would be a different reason.

Outside the scope of my disagreement with our friend above: I can see a few reasons *not* to discount against uncertainty, for example because (a) they’re not proposing a traditional investment strategy anyway, or (b) uncertainty is built into their projections already, or (c) they already give reasons why uncertainty of that kind isn’t supposed to matter in their calculations for large-scale long-term projects.

Even Nordhaus’s (notorious) discount rate for climate change was only in there because his model was supposed to build in projections of climate change’s financial cost vs the cost of fighting climate change, and *that* was on the basis that the range of projected scenarios for climate change had, in that model, to include different projections of the economic growth scenario.
if you're talking about lives, they *throw off value* into the future, but not the past. ok conceivably you think it's a wash or a net negative for the average loss. but then any effort or expense in the name of "altruism" is a net negative, right?
No, I used the word “utility”, as in “utilitarianism”, as in the ethical value theory which motivates this entire apparatus, as in not a monetary value but an ontological moral standard; you only peg a dollar or exchange value on once you’ve got the rest of your utilitarian analysis in hand about how to maximise it.

This isn’t actuarial science (yet)

I’ve repeated this multiple times in this subreddit, but it just keeps becoming necessary to do so: Effective altruism is not longtermism or X-risk. Many people in EA are concerned about X-risk and X-risk gets a TON of funding from rich people, but in terms of people’s day-to-day commitments and priorities, MIRI/Bostrom-types are not even close to the majority (maybe 25%?) of effective altruists. Almost everybody else is worried about animal rights, global development, pandemics, etc. I’m judging this based upon having gone to probably a dozen conferences since 2011 and being heavily involved in / running EA student groups at three different universities (one in the south and two in the northeast).

This conflation is something that I’m growing increasingly worried about because I care deeply about global economic development and animal rights. For the longest time, EA was synonymous with massively increasing the donations going to Against Malaria Foundation, SRI, Obstetric Fistula Fund, etc. The fact that vague associations between massive weirdos like Yud and EA are enough to cause random people to sneer at EA without bothering to understand it is really concerning.

(Sidebar: The example used in the article about giving 0.00 to a homeless person is presented as if it’s something an effective altruist would obviously dismiss, but no evidence is given to support the claim. I consider myself an effective altruist and give generously to homeless people. I just don’t think it’s an ineffective use of my money, for all the reasons mentioned in the article, which for some reason the author pretends are all ephemeral and unquantifiable.)

The EA community itself [thinks the two might be starting to become synonymous, and they have stats to back that up.](https://forum.effectivealtruism.org/posts/LRmEezoeeqGhkWm2p/is-ea-just-longtermism-now-1)
Well that's horrifying. I used to point people to EA sources when they considered what to donate, but I guess that's over. They don't know it, but they're slowly consigning their "movement" to the bin of irrelevant cranks. I guess that's the price of the original sin of tolerating Yudkowsky's nonsense.
Hm, aren't GiveWell and Animal Charity Evaluators still *doing their thing*? Like, I don't expect either of those to one day stop and say "give all your money to MIRI".
Well this is disconcerting and confirms all my fears. Still, I’d like to see polls of the EA community’s beliefs about topic importance rather than where funding is going. (A lot of this effect is driven by the fact that development and animal rights already receives a lot of funding so groups like 80,000 hours have started pivoting to AI which isn’t yet saturated by funding and talent. And a lot of the big funding numbers are driven by a handful of rich crypto nerds throwing their money into AI, but this doesn’t necessarily reflect what most EA folks care about.)
surely literally the point is not what people on a message board think, but where the money (the unit of caring!!) is going.
Both are important. I don’t think money is the only unit of caring; there’s also time and effort. In my PhD program there is a ton of money earmarked for paranormal activity research, but no student is actually doing that research because they think it’s silly. One view, looking at the dollar amounts, suggests “this program is synonymous with paranormal activity research”; the other view rightly notices that most people have other priorities.