r/SneerClub archives
Understanding "longtermism": Why this suddenly influential philosophy is so toxic (https://www.reddit.com/r/SneerClub/comments/wvsdo9/understanding_longtermism_why_this_suddenly/)
68 points

Émile P. Torres writes in Salon about longtermism. Mostly serious, though with at least one [snerk] moment:

Bankman-Fried has big plans to reshape American politics to fit the longtermist agenda. Earlier this year, he funded the congressional campaign of Carrick Flynn, a longtermist research affiliate at the Future of Humanity Institute whose campaign was managed by Avital Balwit, also at the Future of Humanity Institute. Flynn received “a record-setting $12 million” from Bankman-Fried, who says he might “spend $1 billion or more in the 2024 [presidential] election, which would easily make him the biggest-ever political donor in a single election.” (Flynn lost his campaign for the Democratic nomination in Oregon’s 6th district; that $12 million won him just over 11,000 votes.)

EAs will tell you this is objectively the best use of $12 million

Just imagine if they used it to buy voters copies of HPMOR.
Haha, there was the time they decided the objective best use of money was spending 30 grand "Giving copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020"

> Empirically, a substantial number of top people in our community have (a) entered due to reading and feeling a deep connection to HPMOR and (b) attributed their approach to working on the long term future in substantial part to the insights they learned from reading HPMOR. This includes some individuals receiving grants on this list, and some individuals on the grant-making team
>
> ....
>
> I think that Math Olympiad winners are a very promising demographic within which to find individuals who can contribute to improving the long-term future. I believe Math Olympiads select strongly on IQ as well as (weakly) on conscientiousness and creativity, which are all strong positives. Participants are young and highly flexible; they have not yet made too many major life commitments (such as which university they will attend), and are in a position to use new information to systematically change their lives’ trajectories. I view handing them copies of an engaging book that helps teach scientific, practical and quantitative thinking as a highly asymmetric tool for helping them make good decisions about their lives and the long-term future of humanity.

Actually most of the things they funded were really stupid, but honestly I'm just glad they're throwing away their money uselessly instead of doing something actually dangerous, like, (to name a totally random example) boosting lunatic ideologues like Blake Masters or JD Vance into general elections they might actually win

Edit: oops forgot [link](https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-recommendations#Mikhail_Yagudin___28_000__)
I am strongly in favor of MIRI and always have been, because it is one of the least objectively-evil things Peter Thiel is willing to spend his money on.
I think MIRI has done its fair share of damage in motivating a pernicious strain of longtermism (AI X-risk) which has the potential to implode EA and efforts by real altruists to reduce present-day suffering.
It's actually really bizarre to me that they seem to consider HPMOR this foundational work but no one seems to think it's worth the time or effort to rewrite it as an independent work that can stand on its own outside of the Harry Potter franchise.
I like the parallels between LW and The Atlas Society here.
[deleted]
I'd argue that there's a ton of independent works that are essentially shitting on some other creative work but still manage to stand independently. Heck, in some cases they've become more famous than the original, e.g. _Lord of the Flies_.
Agreed, you're right and I should have phrased it another way but I was myself being lazy. I should have said that I can't think of a way to separate HPMOR from HP because much of how the book works relies on the reader knowing the original. HPMOR is also Ender's Game fanfic but can be separated from EG without suffering for it. You can't do that with HP because the text works on the assumption the reader will be contrasting what they're reading with what they remember about HP canon. To the extent it's good, it's good *as* fanfic, I'm thinking in particular of the observations on Slytherin that would not be translatable if you filed off the serial numbers, whereas a lot of what *Yudkowsky* adds to the world is largely arbitrary restrictions so his characters have something to exploit, D&D munchkin-like. A book written only with his contributions would be joyless and I'd say more but the key to the letter between e and g broke and now I'm too annoyed to write about xanxic.
> Participants are young and highly flexible; they have not yet made too many major life commitments (such as which university they will attend), and are in a position to use new information to systematically change their lives’ trajectories.

big phyg energy
This is the same post where they award a ~$20k grant to someone to learn to ride a bike and think about AI. It's probably my favorite post on the LW forums.

[removed]

I think a big aspect of it is that a lot of our actual, real problems - environmental, economic, etc. - are extremely difficult to solve, especially within the confines of the current system, and the failure to solve them would/will have dire consequences. That's a lot of pressure! But if you instead choose to solve problems that you make up, your failure to solve them will never have any consequences!
Plus, solving real problems would step on a lot of people's toes - these people will gladly pay for you to make up various longtermist problems, fly to conferences, and so on. There's a similar thing they keep trying to make happen in ethics, along the lines of claiming that e.g. insect population decline is a good thing because insects are suffering in the wild. That one is both more obscure and more malevolent.
The cynic in me agrees with you. Solving climate change is really hard and requires buy-in from a lot of people. It also requires a pretty large amount of scientific knowledge, and there are a lot of people already working on it. So getting attention (and more importantly, money) is really hard. And any solution you propose will be examined by a lot of smart people who will call you a dumbass if your idea won't work.

But if you invent a really scary but very-far-off problem, this solves all of those issues. It's really scary, so you can justify spending money on your problem instead of nearer-term but theoretically less dangerous problems. It's made up, so whatever solution you invent is a lot harder to criticize. (How do you criticize an attempt to prevent General AI when literally nobody knows what an actual General AI would look like? And you can just make up whatever risk probability you want, since nobody has even a rough ballpark of when General AI will be possible.) And you invented it, so nobody is trying to solve it yet, which means you don't have any competition. Plus, since it's not a real problem, you don't have to actually try implementing your solution.
If you assume that *every future human who has not been born, including those in the very far future* (include "virtual consciousnesses" or whatever there as well), is as important as currently-living humans, this will result in value judgements that may look repellent to you but are in fact sensible, logical judgements (assuming the magic future numbers are real!). Totally coincidentally, the issues that they seem to focus on are ones where very rich/powerful people would suffer
It's almost a twisted parody of *actual* wise long-term decision-making. Like, we've been begging them for *decades* to consider the welfare of future generations, and now they're doing it in some kind of fucking monkey's paw way where they're like "wish granted, we will now optimize for the googolplextillion humans who will someday be living in the nonstop-orgasm machine, at your expense."
Cosplaying The Culture
> “trying to keep the world safe from hypothetical diseases that don’t exist yet”

I don't see what's unwise about this when I've been seeing people bemoan insufficient readiness for novel diseases for the past two years.
I imagine the people at the WHO catching heat for not doing it - when they have been begging for funding and action on it for *years* - find it a bit galling that this is how that work gets attention
It's pretty easy to motivate. First, future people matter. Second, the future may contain more people than exist today. Third, actions today can and will influence the size of long-term futures and the quality of those futures. Therefore, we should pursue actions that preserve, enlarge, and improve the future. Each of these is, in principle, difficult to argue against honestly if you wish to have a consistent ethical system. Where longtermists (and much of the EA community) go wrong is in assuming the primacy of Bayesian epistemology and expected value calculations, which makes them fall victim to fanaticism (pursuing low-probability, high-impact possibilities).
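To make the "fanaticism" point concrete, here is a minimal sketch, with entirely made-up numbers, of how a naive expected-value comparison lets a vanishingly improbable but astronomically large payoff swamp a near-certain, modest one (the probabilities and payoffs below are hypothetical illustrations, not anyone's actual estimates):

```python
# A toy expected-value comparison illustrating the "fanaticism" failure mode:
# multiply probability by payoff and pick the bigger number, no matter how
# absurd the inputs are. All figures here are invented for illustration.

def expected_value(probability: float, payoff: float) -> float:
    """Naive expected value: chance of success times value if it succeeds."""
    return probability * payoff

# A near-certain intervention with a modest payoff (hypothetical numbers).
proven_charity = expected_value(probability=1.0, payoff=1_000)

# A speculative bet: a 1-in-10-billion chance of securing 10^20 future lives.
speculative_bet = expected_value(probability=1e-10, payoff=1e20)

print(proven_charity)   # 1000.0
print(speculative_bet)  # 10000000000.0 -- the long shot "wins" by 7 orders of magnitude
```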
That, and that they assume there is a moral responsibility to bring more humans into existence. Most people, I think, wouldn't really agree: antinatalism isn't that popular, but most longtermists go further and think that simply *not creating* uncountably many simulated humans is itself a moral tragedy.
Nah, arguing against it is actually quite easy. I just don’t believe that we are morally obligated to bring people who don’t currently exist into existence, because they don’t exist.
You don't have to believe you're obligated to bring future people into existence to believe in longtermism. You just have to acknowledge that future people will come into existence whether you like it or not, so you have to decide what kind of future you want to leave them. This is the basic motivation behind a commitment to environmentalism and climate change activism, for example.
Yeah but that’s not what they mean by longtermism. What they mean by longtermism is “infinite simulated future people having infinite orgasms so we have to enslave Africa to make that happen”, not “I have to make sure my children and grandchildren don’t starve.” One is a moral mandate to sink infinite resources into something that might not even happen and the other is taking basic precautions and not being greedy capitalist bastards
No, they really don't. And I'm saying that as someone who is likely on your side -- I argue against longtermists in EA all the time. The *vast* majority of longtermists place a premium on avoiding extinction, whether that be through pandemics, nuclear accidents, AI, and so on. The value of preventing extinction, regardless of your moral framework, is pretty high! Not a single EA I've ever talked to is remotely interested in the idea of "simulated future people" or building some version of Nozick's Pleasure machine.
Ok, but in that case I have no problem with those longtermists. I only dislike the people who call themselves “longtermists” and try to claim that any hedonistic action, or even any action not contributing to some imagined future Computer people, is ontologically wrong because of said future people.
I think the obvious corollary here is uncertainty. The more time between now and the hypothetical future, the more time there is for something that your model didn't account for to completely change the world such that your plans and expectations are now irrelevant. There's also a greater chance that your assumptions re: the consequences of your actions today will be proven wrong even without anything unexpected happening, but I think we know better than to expect rationalists to avoid that kind of hubris.

Especially when thinking in terms of optimizing a post-singularity world, we shouldn't forget that the entire reason it's called "the Singularity" in the first place is that we cannot meaningfully predict what will be on the other side from here; all assumptions and heuristics we can develop break down due to the sheer power available once we're there. This means that our ability to predict the consequences of our actions today on a post-singularity future is, definitionally, *zero*.

So even operating in the same frame, we should discount the utility of our actions into the future, as that uncertainty limits our ability to predict whether our actions make things better or worse, and prefer to make things better now and in the immediate future, since we can more realistically and predictably do that.
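As a minimal sketch of that last point: assume, purely for illustration, that our confidence in any prediction decays by a fixed fraction each year (the 5% annual rate below is an arbitrary assumption, not an estimate). Even a modest decay rate drives the weight we can justifiably put on far-future consequences toward zero:

```python
# A toy model of uncertainty-based discounting: predictive confidence decays by
# a constant fraction per year. The 5% rate is an arbitrary assumption chosen
# only to illustrate the shape of the curve.

def predictive_weight(years_ahead: int, annual_decay: float = 0.05) -> float:
    """Fraction of predictive confidence remaining after `years_ahead` years."""
    return (1 - annual_decay) ** years_ahead

for horizon in (1, 10, 100, 1000):
    print(horizon, predictive_weight(horizon))
# 1 0.95
# 10 ~0.60
# 100 ~0.006
# 1000 ~5e-23  (effectively zero: no justified claim about consequences that far out)
```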
Spot on. I've seen longtermists explicitly acknowledge this issue and, without any coherent defense, insist that we can move past it. In one lecture, I saw someone motivate the entire case for longtermism by first acknowledging that "it may be the case that the future is so uncertain that our ability to predict the future may fall faster than the magnitude of the problems we face grows. Uncertainty would cancel out the cost of inaction." He then made a few offhand comments about superforecasters and how you can predict general trends even if you can't predict specific events, and insisted that this is all you need to motivate concern about future populations hundreds of thousands of years from now.
I mean, if present actions didn't have a cost (including opportunity costs) then I can see how there could be arguments there. But if you throw a million orphans into the orphan thresher and it breaks down 85% of the way towards spitting out a benevolent god, you still have a million threshed orphans.
Yeah, sorry, that's what I meant. They ignore that present actions have costs, and then, confronted with this, they retreat to the motte that "longtermist causes generate short-term benefits" (e.g. climate change mitigation), ignoring that what the overwhelming majority of them are actually worried about is AI risk, for which it's not clear any amount of effort generates short-term benefits.
Because they're contrarians, and blue hairs and SJWs care about those other things so they instinctually turn their nose up at those very real concerns in favor of spooking themselves with science fiction instead.
Because they think those problems are at least mildly likely to arise within the following few decades, are the biggest threat to humanity making it through the century, and are hard enough that we stand little chance of success without preparation ahead of time. Also, even if they aren't considered *most* important, you'd expect a philosophy called "longtermism" to support increasing effort spent on those things relative to the baseline.

People try to invent all kinds of other cynical reasons (solving fake problems, avoiding criticism, finding excuses to make futurist blogging seem important, etc.), but if you just read their writing then this is what they think, and if it were true this would just clearly be a wise course of action. There's not really a hidden agenda there.
Apparently, money laundering.

I feel like you guys are focusing too much on the vanishingly low probability of Flynn winning (<.01%) and not paying nearly enough attention to the potential value of a Flynn victory (infinite utils).

[removed]
Well, to be fair, of all the long-term causes you can lampoon, pandemic preparedness is probably the most sensible. (See COVID-19).

https://i.imgflip.com/6qxtkj.jpg