r/SneerClub archives

Apologies to /u/queerbees if I’m hijacking this title format.

Rationalists pride themselves on being on the cutting edge of technology and intellectualism. As Scott Alexander writes in a blog post, “We’re almost certainly still making horrendous mistakes that people thirty years from now will rightly criticize us for. But they’re new mistakes. They’re original and exciting mistakes….”

This is the diametric opposite of reality. Intellectually speaking, they have not made any fundamental advances beyond, at best, Victorian thought.

Here is a brief history of the early modern origins of “Rationalism.” Rationalism is fundamentally based on the conjoined concepts of capital-P Progress, rational planning, and legibility. During the early, as well as the late, modern era, the teleological concept of history embedded within Christianity was secularized (or only partially, in the case of Hegel’s heretical Christianity) and transposed onto political economy and technology. This ultimately reached its logical conclusion in Whig history and its anthropological counterpart, cultural evolutionism. Dispensations were relabeled as “stages” of history, driven by some kind of teleological or orthogenetic force, from Adam Smith’s schema (hunting -> shepherding -> agriculture -> commerce) to Auguste Comte (theological -> metaphysical -> positive) to L. H. Morgan (savagery -> barbarism -> civilization) and many others. The rationalist bolts on transhumanist ideas about a new stage, or dispensation, of history, culminating in the Singularity, the end of history. Hence all the pronouncements of ~~communism~~ AGI in 20 years.

Of the various sorts of Progress posited by modern thinkers, some believed that the teleology was not necessarily fixed, but could be brought under the control of humans through rational planning and scientific management. This began as an ideology referred to as “Improvement.” The Scottish Enlightenment emphasized that a core element of human nature was the innate drive toward improvement. “Improvement Trusts” were established across Britain to speed up the evolutionary process of the nation. Rational planning was introduced across various domains: roads, urban planning, agricultural techniques, etc. Much effort was devoted to creating “Improved” persons. This was accomplished in part by confining populations within specialized institutions – jails, asylums, workhouses, etc. It also became a fixation among the proto-entrepreneurs of the “middling sort,” culminating in the self-help genre initiated by Samuel Smiles, as well as in the Benthamite hedonic calculus that fed into neo-classical economics. The phenomena of self-“optimization” and “life hacks” beloved by Silicon Valley are Smiles for the 21st century. The failure of rational planning can then be explained away as a failure of the individual spirit, i.e. plebs who have not self-optimized to a proper level of rationality. The true horrors of the ideology of Improvement, however, came about with the advent of eugenics (relabeled “HBD” by rationalists), in which humanity would re-engineer its own biology to propel itself into the next stage of history and become true ubermenschen. This vision is laid out in the essay in which biologist and eugenicist Julian Huxley coined the term “transhumanism.”

The tool to make this happen is legibility. For rational planning to operate, the world must be legible; anything outside of legibility must therefore be reshaped. Even before Improvement had been formulated as an ideology, the early demographer and political economist William Petty had set a blueprint for its rhetoric and political machinations. Petty had been commissioned by Oliver Cromwell to survey Ireland in order to enclose and parcel out conquered territory. Petty blamed the Irish for their lack of development, essentially casting them as barbarians. This attitude would be reproduced in the doctrine of terra nullius, deployed in colonial projects across the globe. Legibility was also central to Improvement. Surveillance of undesirables was key to turning them into Improved persons or removing them as necessary. Returning to Bentham (and Foucault), a society-wide Panopticon was necessary. Outside of institutionalization, urban planning fixated on “rookeries,” or slums. One reason was that the back alleys and illegible design of rookeries created a breeding ground for illegal activity, a criminal miasma. On the personal side, there was the drive to make the self legible, which can be seen at least as early as Ben Franklin’s moral accounting of his own virtues and vices in a literal ledger. The legible self was an Improvable self. We see this playing out again in the neo-liberal era of international “development” projects, Effective Altruism, and gentrification. The irrational plebs are not legible to the rationalist and can only be made so through some bastardized form of bio-psychology. Expulsion of the proles from city centers via gentrification is actually a good thing because it Improves the city and cleans out our 21st-century rookeries. Charity is only useful insofar as its results are legible in a utilitarian framework, hence EA.

This is merely a broad outline; it does not even cover all the ways in which “Rationalism” is a repetition of musty tomes of the past.

Further sneering:

Wes Forsythe, On the Edge of Improvement: Rathlin Island and the Modern World

Foucault, pretty much all the shit

John Gray, Black Mass

Julian Huxley, Transhumanism

James C. Scott, Seeing Like a State

Sarah Tarlow, The Archaeology of Improvement in Britain, 1750-1850

This is an interesting critique, but like many critiques of rationalism, it emphasizes how the mindset developed rather than whether it is mistaken. Since you’re focusing on the historical and psychological factors behind rationalism (teleology and legibility, respectively), what if I were to ask you to deal with the notion of a hyperstition (a term from the long-defunct CCRU, sort of postmodern fellow travellers)? Nick Land writes that “hyperstitions by their very existence as ideas function causally to bring about their own reality… transmuting fictions into truths.” If the belief in some kind of teleology immanent to history drives a mass push towards legibility and improvement, does teleology effectively “bring itself into existence”? This seems similar to what you’re arguing, but changes the implications.

Now, of course, Nick Land is evil. However, his work engages with the same stuff you’re talking about, and comes out in favor of the free market as A REPLACEMENT for thought (or for rational planning dependent on legibility): a kind of vast experimental process, accelerating towards something grim and horrible. You could see accelerationism, or right-accelerationism, as rationalism without teleology or legibility, only runaway capital. “The ‘dominion of capital’,” he writes, “is an accomplished teleological catastrophe, robot rebellion, or shoggothic insurgency, through which intensively escalating instrumentality has inverted all natural purposes into a monstrous reign of the tool.” According to Land, this still gets us eugenics, transhumanism, and runaway artificial intelligence, just a darker variant of each. “The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.” The only way through, according to Land, is to align ourselves with the forces beyond our control. As he puts it: “Garbage time is running out. Can what is playing you make it to level-2?” Even the commitment to Improvement is replaced with a commitment to the destruction of that which cannot survive. I think the problems with rationalism therefore run deeper than devotion to teleology and legibility, since none of these beliefs seem to depend on either. Things just get darker when you remove them from the picture.

There are many critiques to be made of the empirical grounding of life extension, cryonics, neuroscience, evolutionary psychology, etc., which could be the subject of future sneerquences, though many have already been made without reference to rationalism. As for Land, the idea that we can transform ideas into reality is pretty banal, so I would imagine there is something more to hyperstitions, but I can't comment on that. Accelerationism still assumes a "line" of history, i.e. a teleology, that we can push forward, or accelerate. This is exactly the same logic as any other modernist concept of history. The point about legibility is interesting, however. One of the defining characteristics of neo-liberalism, according to Philip Mirowski, is the conception of the market as a "super-information processor." Thus, individuals may make some problems legible to themselves, but the overall operation of the market is illegible. In that sense, there is an interesting parallel between the market and the post-singularity AGI: both are rational but illegible to puny mortals, while ensuring efficient economic allocation.
[deleted]
> The emphasis on trying lots of stuff and allowing what doesn't work to be destroyed through reality-testing - rather than on trying to figure out in advance what does - makes it cybernetic (like natural selection), not teleological (like orthogenesis) IMO.

Land's version of it is pretty incoherent to me. There is a global "process" of capitalism that we can "accelerate," but at the end of it is just some kind of Lovecraftian horror or something.

> There's debate as to how much James C. Scott borrowed from Hayek

Not much, I don't think.

http://crookedtimber.org/2007/10/31/delong-scott-and-hayek/

http://crookedtimber.org/2010/09/10/scott-versus-hayek/
> Accelerationism still assumes a "line" of history, i.e. a teleology, that we can push forward, or accelerate. This is exactly the same logic as any other modernist concept of history.

Do you assume this is definitely wrong, and if so why? I suppose it depends how you interpret the word "teleology", but if you define it purely in terms of a tendency to end up in some preferred state, without the assumption that there's any entity that has that state as a conscious goal, then we can see types of teleology in situations like the "attractors" physicists find in different dynamical systems, or convergent evolution in biology, which may represent more complex types of attractors (look at the way ants, bees, termites, and naked mole rats all converged on very similar colony structures, for example). So it seems like an at least somewhat plausible hypothesis that if we could somehow get a peek at alien civilizations or parallel histories of the Earth, we would see that similar material constraints (particularly those depending on what technology is available) have at least a statistical tendency to cause convergence on similar types of societies, similar social arrangements, dominant ideologies, and such.

Sorry for the thread necromancy but I only recently came across this during a sort of “sneerclub wiki walk” and I found it a really interesting take.

I think a lot of it can be explained as a meta-contrarian dynamic, like so:

  • Unsophisticated position: naïve optimism about technology and rational planning (19th century Whig history, 20th century High Modernism, etc.)
  • Contrarian position: “actually the world is a lot messier than that and you have to reckon with a lot of subtle local effects, biases and power imbalances or your nice rational planning will make everything worse” (leftism since the late 20th century, postmodernism, James C. Scott)
  • Meta-contrarian position: rationalists rediscovering the joys of planned solutions, up to and including transhumanism, eugenics and utility-maximising polycules

The part you’re criticising – the part where they go so far into meta-contrarianism that they loop right back to the original view from 1893 and refuse to see its problematic aspects – is then caused by the original position lining up neatly with what they wanted to believe anyway. “We engineers can figure social problems out with our high intelligence and ignore the protests of the superstitious masses” is obviously an appealing idea to the kind of people who are drawn to SSC. Offering them a way to believe it guilt-free? Irresistible.

What do you do or read if you actually think like this?

[deleted]

I'm using it fairly interchangeably with transhumanism, so that was unnecessarily confusing, true. X-risks are an instance of trying to bring the teleology of history under human control. What's at the end of the tunnel? A paperclip cataclysm or FAI utopia?
Contemplating this caused me to realize explicitly something I had previously believed implicitly: I've always seen Rationalism, at least as Yudkowsky envisioned it with Less Wrong etc., as a corrective aimed more at irrational tendencies within the preexisting transhumanist/singularitarian community than at the general public (e.g. the idea that superhuman AGI will automatically be benevolent, because intelligence is intrinsically good or something). Which makes sense, since that was his social circle. This is probably a big part of the reason I regard his work in a more positive light than SneerClubbers do; it was a palpable improvement on what I'd previously encountered in its genre.

> X-risks are an instance of trying to bring the teleology of history under human control.

People try to affect the course of history all the time. Most fail, but to suggest that we shouldn't even try would be kind of nihilistic.

How do you distinguish between a teleologist and one who merely observes the phenomenon of (lowercase-p) progress in various historical trends and attempts to extrapolate them into the future, which is a perfectly reasonable thing to do? The only answer I know is that a teleologist perceives evidence of agency or purpose in the workings of history. I don’t see much evidence that present-day Rationalists are especially susceptible to that error. You can still pattern match features of their ideology to teleological belief systems like Christianity, but that’s not a very strong argument by itself; you could do the same thing with e.g. global warming: “Clearly these so-called ‘climate scientists’ are merely dressing up Christian apocalypticism in secular garb…”

> Charity is only useful insofar as its results are legible in a utilitarian framework, hence EA.

You don’t have to believe that only the legible results of charity matter to believe that optimizing the legible results is an experiment worth trying. I assume most charitable giving is still happening outside of the EA framework, in which case, if there are important but illegible purposes to non-EA-style charity, the diversion of resources towards EA probably hasn’t been big enough to damage them all that much. And anyway, what alternative course of action does the observation that not all effects of charity are legible even suggest? You can argue against particular EA recommendations on the object level, but then illegibility is as much an impediment to you as it is to them.

I would say that extrapolating current trends indefinitely is one of the foibles of futurology, but not necessarily teleological. You start getting into teleology with things like the law of accelerating returns and the technological determinism inherent in the assumption that AGI will definitely happen. I would bite the bullet that some of the discourse around global warming is teleological, or at least recapitulates apocalypticism or catastrophism. Climate scientists present a wide range of scenarios, although I find the social-scientific prognostications (say, GDP loss from global warming) to be the sort of thing taken with a dump truck full of salt. When it comes to the r/collapseniks, they're definitely crossing over into apocalyptic territory, where collapse is predetermined, even as the concept of "collapse" is increasingly being [called into question](https://savageminds.org/2010/03/16/questioning-collapse/). As for EA and legibility, my argument is not about whether it is good or bad, just that the connection exists. I have problems with the EA charity rankers, but I do think that NGOs definitely need more transparency, which some EA orgs provide. My point here, though, was to connect EA to more general concepts rather than to evaluate its effectiveness.