r/SneerClub archives
"longtermism, as proposed by Bostrom [...] is not equivalent to ‘caring about the long term’ or ‘valuing the wellbeing of future generations’" (https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo)

Good article! It does seem a little unfair to Bostrom to end on the “technological progress itself is an existential risk” note when Bostrom repeatedly emphasizes this already, but I think it still works as part of a criticism of the wider “longtermist” ideology.

I got curious about “other discussions (9)” and the top ones are:

/r/futurology going, ‘what kind of a moron would actually advocate the things he’s describing? Why is he attacking some dumbass strawman?’

/r/slatestarcodex going, …something vaguely negative about how it’s too general or misses the point? I’m not entirely sure, because I just can’t make myself go through these posts without my eyes glazing over by the second paragraph at the latest.

By not reading r/ssc you missed out on these gems:

> You can logically justify pretty anything if you divide by infinity. 100 trillion future lives against the current existence of say 7 out of 8 billion people. This is one of the reasons why **AI is so freaking scary, it's because it's rational and we're not**.

(Emphasis mine.)

Also one person going 'well, we should just get more resources, and for that we need to maintain our current consumption levels'. They are being downvoted, thankfully (but that might be because he is worried about ecofascism), as this is stupid: we can consume less and still maintain a certain level of technical capability; there is no need to push people to buy a new smartphone every 2 years. (E: this is prob a bad example, but by ruling out scaling down on certain things you also remove certain important options, like getting rid of private cars and going all in on public transport, which for CO2 emissions is way, way better than switching everyone to Teslas, if we make the additional assumption that Musk stops making things which go up in flames randomly.)

> 'Oh, you think longtermism is bad? Well, just saying it will lead to global surveillance isn't an argument; that doesn't have to be bad.'

(My words.) I'm reminded of the 'I don't take people's worries on global warming seriously unless they can explain [highly specific thing about global warming] to me' post. There is certainly a NSFW post in this btw, on how certain smart people rationalize that they are the smartest because they weirdly gatekeep their idea of others using their own specific tests. See also primalpoly's 'ask your dates about IQ' test. The character Anton Chigurh (the psycho killer) does things like this, taken to an extreme, in the movie [No Country For Old Men](https://www.imdb.com/title/tt0477348/), and people weirdly interpret it as a sign of high IQ rather than just being really socially awkward (notice he doesn't talk with people; he mostly just follows a script or lectures them in a vague way).

> Nice summary. That article was really long

(Note: there was no summary, just a cherrypicked quote.)

The effective altruism subreddit is just as bad:

> Ha, we got hatemail, that is a sign we are doing the right thing.

(My words.)

So I think it’s quite possible that an AI-like human extinction event can only happen through an unprecedented combination of unchecked arrogance, megalomaniacal over-ambition, and overall stupidity. From this perspective, “longtermists” could well end up being the most dangerous existential risk in the world.

Since it's impossible to prove or disprove a hypothetical claim like that, I declare it rational, and would like to invest $788 million in your institute.

I have to admit I like longtermism as a thought experiment, but not as something worth taking very seriously in actual risk analysis (fiction and art are rich veins, though). Torres’ critique here is pretty good; he had a similar essay not too long ago that I liked too.

10^58 future gods is a whack estimate; it presumes there are no aliens.

I think this is a kind of uncharitable reading of the Bostrom paper they cite, especially if you compare the quotations to the surrounding context in the source material. I don’t think it’s accurate to say Bostrom, or most people who share this philosophy, hold the robot-esque idea that individual suffering and death is morally inconsequential, or that short-term catastrophes aren’t a major problem, whether in absolute terms or relative to long-term risks.

There is sometimes a cold, detached language used in certain discussions about these things - e.g. that a given catastrophe may not be a big deal “for the chance of humanity’s long-term survival” - but I don’t think that should be confused with the person’s overall moral values, or taken to mean they don’t consider such a catastrophe an intolerable tragedy “for humans”. I think this is more a quirk of the community than an inherent lack of empathy.

I think there are definitely valid arguments that some people may be overvaluing, or over-allocating resources towards, the long term, and I can see how that could in principle become a big problem. But “I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today” feels like some kind of weird fear-mongering about fear-mongering.

(Disclaimer: I am self-evidently not “from here”.)

I still always find it funny that “Boström” now refers to this weirdo and not the 19th-century idealist philosopher. (Admittedly, he was also a conservative.)