I unironically love this. It is delightful. Timnit Gebru is incensed
because she thinks it’s deranged, which it is, but it’s mostly just
deranged in the same sense that any set of religious beliefs is
deranged.
Here are some things that I am delighted by:
New Testament vs. Old Testament: If Yudkowsky is
offering us an Old Testament flavor of Rationalism, in which the AI God
is wrathful and punitive, then it stands to reason that there should be
a New Testament version too, in which the AI God loves us and all things
happen for a good reason. On the one hand I’m surprised it’s taken us so
long to get here, but on the other hand this continues the impressive
pace at which the Rationalists are speedrunning the evolutionary course
of the Christian religion.
Spinoza’s AI God: It’s tempting to dismiss this
stuff as techno-libertarianism with insaner jargon (which it kind of
is), but I think it’s better understood as Spinozan Rationalism. You
see, the AI God is not a separate entity that seeks to destroy us, but
rather It is a great, all-encompassing intelligence of which we are an
integral part. The author basically says this explicitly!
Effective accelerationism aims to follow the “will of the
universe”
We must have faith: The author just comes right out
and says it!
e/acc is about having faith
Doing religion and calling it science is prototypically sneerworthy
behavior, whereas doing religion and calling it religion is almost
respectable…
Counting angels: …but they still kind of think that
they’re doing science
Directly working on technologies to accelerate the advent of this
transduction is one of the best ways to accelerate the progress towards
growth of civilization/intelligence in our universe
At the rate they’re going, though, it’s surely only a year or two at
most before they start to say that all this talk about scifi technology
and the singularity is really meant to be metaphorical, and
that Rationalism is really about a personal journey to understanding and
deepening one’s communion with information and computation.
... After making that comment, I spent way too much time gathering links to illustrate how everything this screed says about science is just shoving words together that the author vaguely remembers hearing in proximity. E.g., the "Jarzynski-Crooks fluctuation dissipation theorem"? [Not a thing](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C22&q=%22Jarzynski-Crooks+fluctuation+dissipation+theorem%22&btnG=). There are three separate but related ideas (the Crooks fluctuation theorem, the Jarzynski equality and the fluctuation-dissipation theorem) which the author doesn't know *are* separate, because he hasn't a clue what he's talking about. But that's probably evident enough without elaborating upon it.
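For anyone curious, here is a quick sketch (my own summary, not anything taken from the screed) of the three separate results getting mashed together:

```latex
% Jarzynski equality: relates nonequilibrium work W to the equilibrium
% free-energy difference \Delta F, with \beta = 1/(k_B T)
\langle e^{-\beta W} \rangle = e^{-\beta \Delta F}

% Crooks fluctuation theorem: ratio of forward and reverse work distributions
\frac{P_F(+W)}{P_R(-W)} = e^{\beta (W - \Delta F)}

% Fluctuation-dissipation theorem (classical limit): ties the power spectrum
% of equilibrium fluctuations to the imaginary part of the linear response
% function \chi(\omega)
S(\omega) = \frac{2 k_B T}{\omega} \, \operatorname{Im} \chi(\omega)
```

The Jarzynski equality does follow from the Crooks theorem by integrating over work values, but the fluctuation-dissipation theorem is a much older, near-equilibrium result; smashing all three names into one hyphenated blob is a decent tell that the word salad is word salad.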
“e.g. a new technological paradigm emerges, letting the free
market find how to extract utility from this said technology would be
the best way to proceed, much better than fear-mongering”
The 250-year-old background drone of conservatism. “Look, I know
the current socio-economic situation may look bad, but have you
considered how much worse it could become if you tried to stop the
moneyed classes from doing whatever they want? That’s not an acceptable
risk. Please be civil and shut the fuck up”.
This is a pretty nice summary. Mao wrote about it almost 100 years ago, when the moneyed class called it “being practical”, to which he responded … well, I think we know how he responded.
> "Look, I know the current socio-economic situation may look bad, but have you considered how much worse it could become if you tried to stop The Party from doing whatever it wants? That's not an acceptable risk... *for you*. Please be civil and shut the fuck up, or we'll send you to a re-education camp".
If there was a hell Mao would be rotting in it, the fucking hypocrite.
It's the antithesis of EA fears about the dangers of technology, basically saying something like "everything will be fine, we need to accelerate growth as much as possible". X-risk people's worst nightmare.
You know, I read [Bostrom's x-risk essay](https://aeon.co/essays/none-of-our-technologies-has-managed-to-destroy-humanity-yet). He doesn't argue that society should slow down growth or deliberately not pursue certain technologies; he actually considers that to basically be an x-risk.
> It would be bad news if the vulnerable world hypothesis were correct. In principle, however, there are several responses that could save civilisation from a technological black ball. One would be to stop pulling balls from the urn altogether, ceasing all technological development. That’s hardly realistic though; and, even if it could be done, it would be extremely costly, to the point of constituting a catastrophe in its own right.
They've always been like this. Their solution to harmful technologies is not to choose not to pursue them, or exercise caution. It's to create a global Panopticon or escape to Mars or some bullshit like that.
Because the longtermist project doesn't give a rat's left asscheek about actual people. Failing to advance technology to a hypothetical point where we've got billions of simulated AI constructs running in Dyson spheres would be only very slightly less bad than getting wiped out by a meteor or some embodiment of our own hubris.
e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism
Parts of e/acc (e.g. Beff) consider ourselves post-humanists; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates
Do you want Necrons?
Because this is how you get Necrons.
Lines like “Capitalism is hence a form of intelligence” are very good evidence that the writing is not.
This is the successor thing to EA or some shit iirc.
Go ahead, read some of this drivel; he seems to think that big words make for better communication. The writer is functionally illiterate.
Wait, isn’t e/acc a joke?