true longtermism is killing billions of people now to prevent an apocalypse that you made up on the off-chance that the descendants of the survivors eventually want to try going to space again
There was an Astral Codex Ten meetup I was trying to avoid going to a year ago but some friends talked me into popping in for at least a little while. I started talking to someone who hadn't gone to any kind of rationalist meetup before, so I started up what I thought would be a nice conversation getting to know him as someone who wasn't just another rationalist. I was sorely mistaken.
He was someone who seemed to have avoided meetups for years because he found most rationalists to be too politically correct. He said that, unlike LessWrong, the real hub for rationalists willing to go all the way is r/theMotte.
We started talking about the prospect of the end of the world. I brought up climate change. He brought up how, if there were some way to prevent human extinction that would kill half of the rest of the species in the process, he'd do it.
I acknowledged, as a thought experiment, that in a scenario where human extinction was guaranteed but I could press a magic button that would save humanity overall while killing half of all humans, I'd press it. I also repeatedly pointed out there's no realistic version of that which could ever happen without risking killing everyone anyway, like World War 3. He more and more emphatically repeated his point.
It was evident he had stewed on this often enough to have fantasized in detail about himself in the role of the righteous anti-hero the world needs. It was that rare conversation so viscerally creepy it made me feel physically gross, so I left. I concluded that was the last time I'd check out a rationalist meetup, even on a lark, on the off-chance someone really interesting instead of off-putting would show up.
If Yud is this candid about the same fantasies, I shudder to think just how common this mindset is among rationalists overall.
And after we've burned all the easily accessible, energy-dense fuels. Not to mention the Milankovitch cycles, which are going to add to our worries by making the climate even less predictable than *we're* already on the way to making it. Even doing something as technologically simple as keeping the lights on is going to be a major achievement for our huddled survivors.
I’m gonna hate myself for asking this, but since when does Yud recommend genocide and how does he think that would be an improvement over the robot god genocide? I hope I don’t have to read Harry Potter fanfic to answer this question
AIUI, Yud and other longtermists believe that AI apocalypse will kill *all* humans, whereas other forms of apocalypse like nuclear war or climate change will only kill *most* humans, making them less bad. Of course, for most people, killing 99% of humanity for the chance to maybe prevent something that might kill 100% of humanity is both stupid and evil, but that doesn't stop Yud.
The biggest problem is that this argument is invalid. There's only a small chance of restarting industrialized society after a nuclear war (good luck finding easily accessible natural resources; we already mined everything that was cheap).
Which means that even if the "preferable" nuclear war won't kill everyone, an inevitable asteroid eventually will.
According to the incredibly naive mathematics that seems to permeate the space, an infinitesimal chance is still better than zero, because from first principles 0 < 1e-99 and that's that.
It has the shape of a two-errors-make-a-right type of fallacy, except it's two priors that tend to infinity (infinitely smart AI meets infinitely lucky survivors) that are supposed to lead to a realistic outcome.
Well, that's the thing for them. The chances of revitalizing civilization after some other apocalypse are technically slim to none, whereas there's just no chance after a robot apocalypse, so World War 3 is probabilistically the lesser of two cosmic evils. Shut up and multiply!
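For anyone who hasn't seen the "shut up and multiply" arithmetic written out, here's a minimal sketch of the kind of calculation being sneered at. Every number in it is invented purely for illustration (the 1e-99 is the same joke figure from the comment above); none of it comes from any actual longtermist source.

```python
# A sketch of the naive expected-value arithmetic being mocked here.
# All numbers are made up for illustration only.

ASTRONOMICAL_FUTURE_VALUE = 1e54   # hypothetical "value" of a galaxy-colonizing far future

p_recovery_after_nuclear_war = 1e-99   # infinitesimal, but still "greater than zero"
p_recovery_after_ai_apocalypse = 0.0   # assumed to be exactly zero

ev_nuclear_war = p_recovery_after_nuclear_war * ASTRONOMICAL_FUTURE_VALUE
ev_ai_apocalypse = p_recovery_after_ai_apocalypse * ASTRONOMICAL_FUTURE_VALUE

# Because 1e-99 > 0, the multiplication always "proves" World War 3 is the lesser evil,
# no matter how many billions of present-day deaths sit on the other side of the ledger.
print(ev_nuclear_war > ev_ai_apocalypse)  # True
```

The trick, of course, is that once you multiply any nonzero probability by a large enough made-up future, the present-day body count stops mattering.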
I think it's cute they think the idea that humanity must survive is rational. The universe doesn't require us. The only reason we want to survive is that evolution selected for living things that wanted that. But we aren't a requirement for the universe, just an incidental consequence of its rules. Wanting to live is irrational and emotional and perfectly acceptable, just outside the bounds of rationality. It has nothing to do with intelligence.
As is typical for these reactionary types, implicit in Yud’s reasoning is the belief that he and his friends will get to live.
It’s kind of like people fantasizing over feudalism while assuming that they’ll get to be a noble, when the overwhelming odds are that they’ll be a peasant.
99%? That leaves millions alive. More like 99.99999...%.
Isn't the minimum genetically stable population pretty low, like in the thousands (give or take a factor or two)?
I think it's about 100 (and worth noting that humans have *already* gone through a near-extinction event that left only about 100 humans alive, so we have unusually low genetic diversity as a species)
You see, Yud played "universal paperclips" (a pretty cool game) and that has totally defined his notion of what AGI will do from then on. One idle game has more creativity than he's been able to muster in years.
In fairness to Yud, the paperclip maximizer predates Universal Paperclips (Wikipedia attributes it to Bostrom in 2003). You can, and should, dunk on Yud for having ideas that are various combinations of stupid, insane, and impossible, but it really is the case that he has been at the forefront of saying insane things about AI for a while.
Oh boy, do I have an idea for a sneerclub-relevant post: I'm rereading The Hitchhiker's Guide to the Galaxy, and chapter 25 predates Yud. And it's basically proto-LessWrong fears.
Well, it was just a random side chapter, but: an arrogant supercomputer designs its smarter successor, which is given bad instructions and so wastes everybody's time (this is the computer that gives the answer 42 but doesn't know the question), and the successor (technically) enslaves the whole of humanity. Philosophers are against the idea of building the supercomputer, but because intelligence is a superpower (something like that is said in the book, and the super prediction powers of superintelligence gave me flashbacks to the physics-via-picture-of-grass thing), the supercomputer convinces the philosophers they stand to gain money and employment from the whole affair, so it should be allowed to run, etc. It is just one small chapter (25, if you have the book at home), but it has a very Rationalist feeling. Of course the difference is that it is less boring, not as wordy, etc.
E: seems somebody put a copy of [the book online](https://www.deyeshigh.co.uk/downloads/literacy/world_book_day/the_hitchhiker_s_guide_to_the_galaxy.pdf); the chapter starts at page 173.
A choice quote:
> And to this end they built themselves a stupendous super computer which was so amazingly intelligent that even before the data banks had been connected up it had started from I think therefore I am and got as far as the existence of rice pudding and income tax before anyone managed to turn it off.
This is now my headcanon. Have you ever seen Yud and the basilisk in the same room? I think it's interesting that they don't exist in the same timespace, almost as if they are actually the same entity.
What if Yud was recruited to discredit the entire AI opposition movement? Remember his origin story?
> I was digging in my parents' yard as a kid and found a tarnished silver amulet inscribed with Bayes's Theorem
Who planted it there?
"The lead opposition started with a Harry Potter fan-fic and is led by a high school dropout", it all makes sense... with enough science fiction bullshit we can convince LW that Yud is the anti-Christ created by the robot god
I think he is more like Cheradenine Zakalwe from The Use of Weapons. The perfect tool that looks like he should succeed but fails every time due to deep character flaws. The tool you send in when you want to look like you are on side A and fighting for it, but are actually hoping it fails and side B wins.
> how does he think that would be an improvement over the robot god genocide?
The robots are going to torture 2^32 copies of your mind for all of eternity.
Dozens of the world's smartest message board posters used ~~remote viewing~~ Bayesian inference to predict that future, so this should worry you.
It's a transhumanist leftover: apparently there's no point in living forever via brain upload if your original selfhood doesn't transfer to the digital copies.
I've also read here that Yud has written something to the effect that *if* consciousness isn't continuous (disrupted via sleep and such) and tomorrow morning's you is still you *then* all your copies are literally **you** as well, which one might describe as an attempt at transhumanism apologetics.
I’m no philosopher, and I know you’re just summarizing off hand, but that seems pretty unpersuasive to me. The leap from “maybe it isn’t the same you every morning” to “future copies of you should be given any (or equal) weight to you right now” strikes me as laughable.
I'd say it's more like in the course of reinventing religion from first principles they stumbled upon the need for a soul-like concept as a bridge to the techno-afterlife, and the allusion to the episodic nature of consciousness is just handwaving.
I mean honestly it's not obviously wrong, nor is it obviously right. There is no "right" or "wrong" here, there's just whatever you decide to believe (like any religion.) The problem is that Yud has a set of beliefs he would like you to share, and the end-result of slavishly following Yud's belief system seems to be that billions of people need to suffer and die.
Keep in mind that LW orthodoxy also requires accepting the fundamental reality of many-worlds (whatever that means) which implies that there are already many copies of you anyway, all of whom are you. So the movement generally tries to make people comfortable with this concept.
Those morons don't even know about the future AI that will torture all of them for being obnoxious dorks. Anything less wouldn't be aligned with *my* values at least.
No it isn't. Malthusianism is the belief that human population continually rises faster than available resources, and that this causes disasters. He did not in fact suggest murdering entire classes of people to reduce the population; he was in favor of abstinence. Which, while stupid (he was against birth control), isn't genocide.
Yud isn't Malthusian because he's a starry-eyed techno-optimist who doesn't believe that human population has limits, at least not limits relevant to the current day. None of the longtermists do; that's central to their entire ideology. And their ideology proposes vastly increasing the population of humans (or human-like AIs) to enormous numbers.
Social Darwinism is sort of implicit in Malthusianism, but the latter term is usually used in the way the above poster does, that is, to refer to the idea that population growth is an existential problem. Yud's ideas, bad as they might be, are not in line with the tendency usually described as Malthusian.
In fact, while Malthus definitely held views that we would now call Social Darwinist (the term had not been invented yet), his work is not merely Social Darwinist but regular Darwinist too. Malthus was actually a direct inspiration to Darwin. This isn't really relevant to the argument, though; it's just kind of a fun fact.
>As such, generic, classless/non-murdery malthusianism is a new construct by historians.
This is true. All of what you say is true, and Malthus was definitely a big influence on the policies that caused the famine, but that is still the sense in which I have usually seen the term Malthusian used, and by that definition Yud is not Malthusian. His politics are similar to Malthus's in the contempt he has for the poor and his willingness to consign them to untimely suffering and death, as Right-Libertarianism, and ultimately any politics that does not prioritize working-class interests, inevitably does.
Which is to say that I don't think we really disagree with each other, we are just arguing about terminology.
That's not what they are discussing at all.
The closest they come is one of their theories about a rogue AI going all Malthusian, tho I'm pretty sure that's the plot of the Avengers movies.
The problem with malthusianism is that it is a word used most often nowadays by weird Russia-aligned western communists. The LaRouche/Caleb Maupin-aligned people.
So using that word might have others side-eye you a bit. (Esp. if people have followed a bit of breadtube (is that still a thing?))
Don't really have more to add to the definition discussion here; just wanted to mention the current weird culture thing surrounding the word.
I peripherally follow breadtube, so I'm kind of curious if there's a good video about this usage? My exposure to Malthusian is mostly historians in the 80s debating Marxist vs Malthusian explanations for the rise of capitalism.
Not really, it is just a thing you often hear when the breadtubers are dunking on the weird tankies/nazbols/other weird people.
I noticed it when, I think, Thoughtslime and Sophie from Mars were making fun of Caleb Maupin. He and various other people in his orbit just mention malthusianism a lot, esp. compared to other political discussions. (They also accuse people of malthusianism, or malthusian thinking, a lot, and it is one of those weird things.)
Looks like Caleb (or people in his sphere) made up neo-malthusianism and eco-malthusianism; it is used as a tool to accuse others of badness. Somebody on twitter talked about him (and various connections) [here](https://twitter.com/maupinafa/status/1545064258517467137).
Thank you! I think I listened to the I Don't Speak German episode on Maupin which featured Sophie from Mars, but not her YouTube video. Might check it out.
It is def more entertainment than actual deep political content, which does fit the weird sort of larpy politics they are mocking. Will look up the I Don't Speak German audio myself. So thanks for mentioning it.
Ah, [this one](https://idontspeakgerman.libsyn.com/120-caleb-maupin-and-the-conspiracy-left-with-sophie-from-mars). I might have listened to it at the time while doing other things.
You don't have to read the fanfic. That stuff would've been too off-putting to all the teenagers Yud attracted to read it as an outreach project for his anti/pro-doomsday cult. Rationalists publicly floating these ideas has been rare, but it has been increasing in the last year or two, in proportion to their hysteria.
“There would be no shocking memories, and the prevailing emotion will be of nostalgia for those left behind, combined with a spirit of bold curiosity for the adventures ahead!”
The rationalists had a big #metoo in 2018. [Here's a thread.](https://www.reddit.com/r/SneerClub/comments/8sjxm9/serious_twitter_thread_by_someone_detailing_their/)
"We have found no ethical solution. Thus we propose eliminating the problem entirely by making sure nobody can observe the result."
I found this an amusing thought at first, but the more I think about it, the more it begins to sound like something rationalists would come up with in earnest in their desire to be seen as creative and contrarian.
I like how, if world governments were ever to take Yud at face value, they'd probably devise some incentives against building super-AIs other than World War 3, though Yud glosses over that and presumes his fantasies should play out in a reality wherein he's solely entitled to dictate the fate of the future. Other than denying their existence, I wonder how he reconciles his own blind spots with the conviction that he's the 2nd most intelligent possible intelligence.
What is being human? Are you human because of your body, or are you human because of your soul? Where does your soul reside? What about your memories, your ego, or your personality? If you go to Heaven without your corporeal body, are you still human, or just a spirit or soul? Would you still consider yourself human?
Honestly, if I really thought humanity - and the observable universe - were at risk of total extinction before an imminent machine god, “reaching the stars someday” would not rate mention on my list of priorities. I would take it as a massive win if we so much as survived until the next mass extinction event on our planet, lol.
Like even when EY is doomposting he can’t stop being a starry-eyed futurist.
I can’t believe the Comic Book Guy’s MENSA-futurism (“For you it will be much less sex, for me much much more”) has real-life counterparts now via AGI fetishism.
He since deleted the post, but in case anyone doubts that he really did write that, the original is archived here.
His deletion message says he didn’t want anyone to think he was advocating a nuclear first strike, so I guess what he probably meant is that even if tensions were high and strategists were saying that if the U.S. sent non-nuclear bombers into Russia or China to blow up their data centers, there was a very good chance that would trigger a nuclear retaliation, he would still think it was worth it to stop the greater threat of a paperclip maximizer?
Sneerers: nobody is gonna kill any puppies
EY: because not killing any puppies will cause the eldest puppy to grow into the legendary wolf god fenrir and devour humanity, we must kill all the puppies except one
Sneerers: what the fuck are you actually talking about
It's more like this:
Sneerers: How many puppies would you be willing to kill to avoid gambling on some as yet undeveloped AI killing ten puppies?
EY: I would not kill any puppies, because it has no bearing on the creation of an AI. That would be irrational. Naturally, of course, I would happily kill over 99% of humans to achieve the same goal if necessary, because I believe the odds of the AI apocalypse are near 100% otherwise. That is very rational.
Sneerers: That would be a pretty fucked up thing to say in a non-rationalist setting, making this a good demonstration of why we sneer at rationalists.
Dr Strangelove but Dr stands for dropout.
they better not be using eigenvectors, or any dangerous array types to get to the stars!!!!
EY getting even a modicum of mainstream attention is worse than the paperclip apocalypse
“Kill everyone on the planet except for a few people” is certainly an interesting solution to the trolley problem.
One day we’re gonna be citing:
E. S. Yudkowsky*
^(* Better known for other work.)
Eliezer is a crank
Okay, now he’s just a psycho.
so is this like the book of genesis but for futuristic nerds?
my god… the rationality….
I would recommend looking into the Universe 25 mouse experiments conducted at the National Institute of Mental Health. *The Secret of NIMH*
“Mein Führer, I can walk!”