r/SneerClub archives

Greg Egan’s 2010 novel, Zendegi, has a few scenes satirizing rationalists and their “ideas.”

Here are five of those moments:

1. A Rationalist’s Introduction

‘Are you Nasim?’

‘Yes.’

‘I’m Nate Caplan.’ He offered her his hand, and she shook it. In response to her sustained look of puzzlement he added, ‘My IQ is one hundred and sixty. I’m in perfect physical and mental health. And I can pay you half a million dollars right now, any way you want it.’

[…]

‘Just give me your email address.’

‘Absolutely not.’ Nasim increased her pressure on the door and he started yielding.

‘You can always reach me through my blog!’ he panted. ‘Overpowering Falsehood dot com, the number one site for rational thinking about the future –’

2. Who Needs to Read Books?

‘Not compression for the sake of bandwidth,’ Mike explained, ‘compression to save the reader’s time. Abridgement. Like Reader’s Digest Condensed Books, but fully automated, and based on a rigorous scientific analysis of what readers will actually retain […] surely we can figure out what words can be omitted from a great slab of Melville or Proust without altering the impression that they leave behind. People are far too busy these days to indulge in rambling, discursive novels… but if they can just feel just as Prousty in two hours as they would have in eight, every word lost is time found.’

3. Fast Take-off is Inevitable

‘I have been invited to fund an enterprise known as the Benign Superintelligence Bootstrap Project,’ Churchland explained. ‘Their aim is to build an artificial intelligence capable of such exquisite powers of self-analysis that it will design and construct its own successor, which will be armed with superior versions of all the skills the original possessed. The successor will then produce a still more proficient third version, and so on, leading to a cascade of exponentially increasing abilities. Once this process is set in motion, within weeks – perhaps within hours – a being of truly God-like powers will emerge.’

Nasim resisted the urge to bury her face in her hands. However surreal the spectacle unfolding on the screen, there was, in retrospect, something inevitable about it. The uploading advocates who’d sold Churchland on an imminent digital resurrection hadn’t lost their critical faculties entirely, but their penchant for finessing away any ‘mere technical problems’ that might stretch out the timetable was, nonetheless, intellectually corrosive, to the point where the next step probably didn’t seem like such a great leap anymore: hand-waving all practicalities out of existence, transforming the cyber-eschatologist’s rickety scaffolding of untested assumptions into a cast-iron stairway to heaven.

[… Churchland said, ‘]I am leaning towards putting my fate in the hands of an artificial God, for whom such problems will be trivial. The Benign Superintelligence will rule the planet with wisdom and compassion, eliminating war, disease, unhappiness, and of course, death. I am told that it will probably disassemble most of the material in our solar system in order to construct a vast computer that will exploit all the energy of the sun. Perhaps it will spare the Earth, or perhaps the Earth will be reconstructed, more perfectly, within that computerised domain….

There’s no point in fighting it, and the alternative would be far worse. Imagine if one of our country’s enemies did this first. Imagine the kind of despotic superintelligence that Al Qaeda would create.’

4. Fifteen Years Later, Fast Take-off is Inevitably Even More Inevitable

By carefully studying the HCP data over the last few months, the Superintelligence Project had acquired vital clues that would allow it to construct a Class Three Emergent Godlet within five years.

‘And when that happens, what can we expect?’ Bello asked.

‘Within two or three hours, the planet will be entirely in the hands of the Benign Superintelligence. Human affairs will be reorganised, within seconds, into their optimal state: no more war, no more sorrow, no more death.’

‘But how can we be sure of that?’ Bello probed fearlessly. ‘Computers are capable of all kinds of errors and mistakes.’

‘Computers built and programmed by humans, yes,’ Esch conceded. ‘But remember, by definition, every element in the ascending chain of Godlets will be superior to its predecessor, in both intelligence and benignity. We’ve done the theoretical groundwork; now we’re assembling the final pieces that will start the chain reaction. The endpoint is simply a matter of logic: God is coming into being. There is no disputing that, and there is no stopping it.’

5. Communicating with Rationalists

Nasim struggled to reorganise her tactics. How did you get through to someone whose entire world view had been moulded by tenth-rate science fiction? Empathy [was … out]; Caplan probably believed that the only consequence of being orphaned at six was that you tried harder than anyone else to reach the top of your class in space academy.

Imagine the kind of despotic superintelligence that Al Qaeda would create.

One of those perfect pieces of satire that by now has probably already been said in earnest by somebody. Bingo, thanks UN. (This report is actually about AI, not AGI, but let’s not let that get in the way of things ;) (I didn’t read it closely enough to see whether they mention AGI as a serious threat, which doesn’t seem to be the case.))

Nice find. I'll also point out that many people online have repeated the claim that if OpenAI doesn't make AGI (for example, because it listens to the call for a pause on LLM development), then China will make it first and that would be terrible, which isn't too far off from this.
At least China has industrial/research capacity.
Which is why we should launch everything immediately, yes.
Of course not, all of this is movie-plot threats. Third-rate science fiction.

Oh my god. This seems to perfectly capture so many of my thoughts and feelings about AI, in better words than I probably could have expressed them. How have I not known about this novel for over a decade? I will definitely check it out. Thanks for sharing!

Yeah Egan is exactly the kind of person I’d otherwise expect to be in with the rationalist crowd and I’m very glad that he’s not (cos his books are really good)

I think Egan has a better viewpoint on these issues in part because he tries grappling with the actual technical content of these ideas, as opposed to certain rationalist leaders, who ignore all existing research on a topic and philosophize their way into making strong claims without any real evidence. His posts on mathstodon also show that he's quite against current hype around AI (and is definitely against the rationalist discourse on it). I feel this is similar to the situation with Ted Chiang: if all you read from his work was the short story "Understand," you might think he's also a rationalist, but seeing his interview with Ezra Klein or his article in the New Yorker makes it clear that he has a significantly more thoughtful and nuanced view of the issues surrounding technology and capitalism than the rationalist community does.

And EY has said Greg’s Permutation City is his favorite SF book ever. That must sting a bit!

i mean permutation city is really really good, but EY really needs to read Le Guin lmao.

I feel like a lot of rats must have felt seriously hurt to see Egan coming for them. Like being stabbed by William Gibson or Bruce Sterling.

The guys who write SF know it when they see it, and most SF writers have a reasonable understanding of their place in the order of things. They build concepts for narrative reasons, even if it's hard SF, so they know exactly when something is warmed-over tropes and creative fiction, because most of them have played with these ideas for decades at this point.
I bet a lot of them are specifically irritated that Yud is out here going full L. Ron Hubbard off the cyberpunk/singularity riffs they came up with.
He probably makes more a year than more than a couple of the people he's poached from.
Definitely. Yud has a cult with billionaire fans, and being an SF author is not a money-making career, even for the really good ones. In consolation though, someone posted a comment a while back about how Egan, a brilliant SF author who does publishable academic mathematics research in his spare time for fun, basically is the guy Yud wants to be seen as, so those excerpts must have stung.

Greg Egan is probably my favorite SF author of all time. It’s funny that so much of what the rationalists obsess about seems to be just derivative and overly serious adherence to ideas that Egan wrote novels about 30 years ago.

Abridgement. Like Reader’s Digest Condensed Books, but fully automated, and based on a rigorous scientific analysis of what readers will actually retain […] surely we can figure out what words can be omitted from a great slab of Melville or Proust without altering the impression that they leave behind.

…so, CliffsNotes.

Why do that when you could instead use ML to "solve" this problem? To be fair, in the context of the story, the character is pitching it as a kind of personalized compression that will abridge these books differently depending on who's reading it (and the pitch is happening as part of some silly game rather than being proposed seriously).
[Reminds me of this.](https://www.reddit.com/r/SneerClub/comments/yodmju/i_really_thought_this_was_a_brilliant_satire_at/)
[Oh, my sweet, summer child](https://acephalous.typepad.com/acephalous/2007/05/irtnog_by_eb_wh.html)
Thank you so much for linking that! It’s terrific and remarkably prescient.
The end result of basing one's ego entirely on one's intelligence: reading is a chore to be done to get a good grade rather than something done for pleasure in itself.

Is he spoofing eliminative materialism with “Churchland” too? I do like the BS (Benign Superintelligence) pun.
Can’t wait to read it; this was prescient stuff in 2010, before Bostrom seduced Max Tegmark and Jaan Tallinn, and Stuart Russell parseltongued Hawking and Musk.

Yeah I think Churchland's name is very likely a reference to that.
I have on my bookshelves a 1990s textbook titled "The Computational Brain" by Churchland and Sejnowski. Guess this is probably riffing off it?
I doubt he intended any hostility towards eliminative materialism; he said in [this tweet](https://twitter.com/gregeganSF/status/1580729967851503618) that he's a big fan of Dennett's *Consciousness Explained*:

> Just finished rereading “Consciousness Explained” by @danieldennett, and while the title is overreach, after 30 years it’s still unrivalled in the clarity it brings to the subject & inoculation against/skewering of seductive fallacies (most of them sadly still peddled by others).

In a [followup tweet](https://twitter.com/gregeganSF/status/1580776212204769285) he also said:

> Seriously, anyone who has read and understood this book would just fall over laughing at all the nonsense that is (still) written about zombies, “Mary the colour scientist”, intrinsic qualia, etc.

And when asked about David Chalmers' work, he [said](https://twitter.com/gregeganSF/status/1581037687112601600):

> Nothing he’s written on the subject has been at all persuasive to me.
Awww man, that’s the lamest Eliminative Materialism there is! Seriously though, that book is not great and Dennett’s multiple drafts theory of consciousness is hilariously Cartesian.
Cartesian in what way?
It ends up relying on a similar process to the Cartesian theatre: namely, there’s a specific time at which these “multiple drafts” are “read”, which falls into the trap of assuming there’s some separate entity receiving the sensory data. I think Metzinger’s concepts of Phenomenal Presentation and the Ego Tunnel are much, much better developments of teleofunctionalism.
My understanding is that he specifically denied a definite time when the drafts were read, see the summary [here](https://hilo.hawaii.edu/~ronald/310/310-Dennett-MultiDrafts.htm) of how in cases like the [color phi phenomenon](https://en.wikipedia.org/wiki/Color_phi_phenomenon), Dennett thinks it's a *mistake* to make any sharp distinction between an "Orwellian" model where something was originally perceived correctly and the memory edited later, vs. a "Stalinist" model where there is a delay between receiving a stimulus and presenting it to consciousness so that edits can be made before it's perceived. As the summary says, he thinks there is no definitive "finish line" when drafts enter conscious perception.
That makes sense.
Good points.

People are far too busy these days to indulge in rambling, discursive novels… but if they can just feel just as Prousty in two hours as they would have in eight, every word lost is time found.’

The problem with comedies is that they waste time on jokes and humor; if we had an AI that wrote humorless comedies without any jokes, then it would allow people to get through comedies much more efficiently.

as el sandifer pointed out, sam bankman-fried was completely correct: if you wrote a book, you fucked up

This is excellent. Everything is in there.