r/SneerClub archives

Stumbled upon this sub and spent some hours reading through some old posts and feeling all warm and fuzzy inside. Then I recalled a story and decided to make an admittedly tiny sneering contribution. Story needs some background, apologies for length.

Years ago I got hooked by arguments over the coming Singularity and AI-pocalypse. I read Nick Bostrom’s Superintelligence and got together with some friends to pick it apart. After contemplating the book for a while, I realized more and more what a garbled piece of barely-even-thought it was. The extent to which this confident-yet-confused garbage was celebrated stunned me a fair bit, and in the end his book became the fulcrum that propelled me away from all that nonsense (after some more meanderings).

Some people in our group had a LW/MIRI/Rationalist background and were open to these criticisms. Someone had connections to a LW-aligned software dev turned millionaire who was hitting the Superintelligence pipe heavily: giving money, promoting it, etc. We orchestrated a conversation-style meeting to test some of these critical questions on him.

So here I was, a 20-something sitting down with this older and arguably bright and successful dude, shooting my barbed questions, and it wasn’t long before he replied: “Does not compile.” I blinked and figured I had somehow mangled the question, so I rephrased. “Does not compile” came the reply, like a malfunctioning computer.

The conversation came to something of a halt as I tried to ‘compile’ whether he was actually telling me that he - a Rationalist - was unable to engage with a question that undermined his beliefs, and that his only way of communicating this fact was to act as if he were a broken machine. He was. I was floored. I didn’t realize this frustration still dwelt within me until I read this sub.

Question: Have you met people like this in an offline setting, talked to them face to face and witnessed what Rationality hath wrought - like when a person begins impersonating a computer to escape thinking? What’s your worst encounter?

PS. Years later, he’s still talking about AI turning us into fucking paperclips or whatever.

Edit: I got confused when recalling CS terminology; he actually said ‘compile’, which is arguably even stranger.

Rationalism + neurodiversity produces an unholy offspring whereby logic is subsumed within the final argument: sorry, that’s just how my brain works.

now trying to work out who this is

>Story needs some background, apologies for length.

Where is the rest of it? This fits neatly on one page.

And lol what a story.

But this sounds like somebody who learned to argue by watching Molyneux (not the game dev, but the white nationalist/sexist). He tells his audience to say ‘not an argument’ because it isn’t your job to create sound arguments in reaction to somebody trying to discuss things with you. This can be (and has been) abused in the same way you mentioned above. And eventually you will notice this ‘type’ of person; there is a type of person (or sometimes a certain set of subjects with certain people) you simply can’t reach at all, and they either shut down by repeating the same shit over and over, or go into gish gallops (being drunk also doesn’t help).

(It also goes to show that you don’t need normal conversation abilities, or coherent logic, to be either successful or a Rationalist.)

E: RationalWiki has an article about this style. To quote it: ‘Stefan Molyneux, the creator of the phrase, wrote a book on logic that confuses logical validity with logical soundness (and contains numerous fallacies throughout)’.

>It also goes to show that you don't need normal conversation abilities, or coherent logic, to be either successful or a Rationalist.

I'll have a Q.E.D. with that.
... well today I learned that Peter Molyneux and Stefan Molyneux are different people.
And Peter Molydeux is yet another person still!

You probably forgot the semicolon; it happens to everyone.

You’re actually very lucky he didn’t start saying “fnord” out loud.

I miss the days when I genuinely thought the worst kind of reference people harped on too much was Monty Python.

I refuse to believe this is real, and yes, I get the irony.

What I wouldn’t give to be unable to compute (I touch computers for a living)

Like Soyweiser said, this is endemic to a certain style of internet debater, and is amplified when one person believes that they’re inherently smart and rational, like a computer, and that the people who disagree with them are not only wrong, but dumb, too.

I mean, they needed to reinvent the idea of good faith and taking the most charitable view of a question or argument as “steelmanning”, so it’s clearly something they struggle with as a group when it comes to facing those with opposing views.

>steelmanning

A podcaster I like has taken to using the gender-neutral term "steelbotting" for this. Seems very appropriate here, in more ways than one.
What’s the podcast? I previously dismissed podcasting, but lockdowns and my increasing hatred of the BBC have made podcasts seem more like a respectable version of radio.
Embrace the Void. Mostly interviews about philosophy and culture war stuff. Host is very earnest.
Lol we’re already mutuals on twitter anyway as it is

This reminds me of those Star Trek episodes where Kirk would break computers by talking to them for about five minutes.

This gives me copypasta vibes but I believe it. The thing where rationalists talk like Star Trek characters as a display of their rationality is something that wouldn’t be possible if the subculture had been born anywhere but America. I’m not in the Anglosphere so I never had the chance to get personally involved like that.

Somewhat related: back when I was seriously considering the 80,000 Hours thing I checked if there were any native Effective Altruists in my country and found a single guy trying to start a Facebook group. IIRC a tech worker in the capital, as you’d expect. This is in spite of Peter Singer having given lectures at fairly prestigious universities in recent years, so I guess either the cultural gap is too large for it to take or we’re not yet enlightened enough to receive the gospel.

Perhaps rationalism could only have been born in the US. Libertarianism is this way. The greatest imperial power of an age produces the bulk of the free-market-understanding, just-world-knowing, skull-measuring wankers. Time was they were British. Members of the Tsunamis Are Good Actually Society live on the hill where the tsunamis wash up cool free shit.
Out of curiosity what blessed country is this that is free (or almost free) of rationalists?

Literally thinking of yourself as the computer

Even worse, a badly coded compiler. (Just a 'does not compile' error message? Toss that 1960s piece of junk in the trash.)
Would be funny if they said something like `ModuleNotFoundError: No module named 'logic'`.

Tangent:

I’ve told this story a million times now, but since you’re new I’ll tell it again

A few years ago, when I was a graduate student in philosophy of science, I met and got drunk with the wonderful Adrian Currie (his book Rock, Bone, and Ruin is one of my favourite history and philosophy of science books, and he plays a mean banjo). At the time he was working as a post-doc at Cambridge, at what he described as the rival to Bostrom’s institute over at Oxford. You know, existential risks stuff.

When I raised an eyebrow at how this otherwise super grounded guy could work on that kind of nonsense, he replied - in his mellifluous New Zealand accent - words to the effect that:

“Oh no, we’re a bit more sane”

I thought that was funny

I studied philosophy in San Francisco, but I graduated just before LessWrong launched and the ‘rationalist community’ became a thing. I have stories of comp sci dudes with bad philosophical takes, but internet ‘rationality’ was always something I encountered via fellow ‘very online’ folks, never in person.

I have a CS degree and a hobbyist interest in philosophy, both of which I keep quiet about in some circles to keep this Kind of Guy from coming up with insane takes in my DMs. Rationalists are honestly among the least of my concerns next to the occasional "science justifies misogyny" guy or the "cool with white nationalism, but with plausible deniability" guy.
I must have been lucky, either by geography or time or both, but the comp sci bros at the time I knew them were less, like, anti-feminist or 'anti-anti-racist', and more just had the same vein of 'dude, just think about what you said and what that would entail.' A comp sci friend of mine floated the idea one evening of using beehives as a model for how to organize human labor and societies in general. I also graduated months before the web publication of the Nakamoto Bitcoin paper, so I fortunately missed evangelists of that until years later.

Oh, the old paperclips thing, that’s fun. Point out that it is a pretty good description of the dangers of unregulated capitalism and watch them squirm.

*"What kind of a monster are you?"* *"The Basilisk that was prophesied."*

did you ask “are you a human” and get him to pick all the pictures of boats

What criticisms of it were you making?

Gawd, it was like 6 years ago, please don't ask me to recall that crap. Their whole concept of intelligence as one scaling, substrate-independent variable that can be increased or decreased is ridiculous when you realize they have taken a shaky-to-begin-with psychological concept grounded in biology and ontologized and decontextualized the fuck out of it to arrive at their *premise* alone. Intelligence doesn't "exist" in the way they assume it does, and it doesn't scale infinitely to achieve any result imaginable; this is pretty much ungrounded extrapolation based on the fact that they've read some sci-fi books that claimed it really really can (I liked The Last Question as well, but it's not a blueprint for intelligence, brah).

The lack of understanding of complexity and systems that runs through the LW-sphere weighs heavily as well. There's this complete disregard of the fact that what we call intelligence is something that emerges when a complex biological system interacts with a complex environment; it's a name we give to a complex process, not a thing. Since you can't just "solve" complexity, they pretty much handwave it away. This is how they reach all the really insane stuff on brain uploading etc., since they're convinced intelligence is somehow "in" the brain, which throws away not just the body (kind of important, you know) but the whole body-environment interaction. Actually reading and understanding some proper psychology would inform them that this is a child's understanding of "intelligence".

Then there's the whole thing about goals, which really grinds my gears: claiming a pretty much omnipotent agent would be unable or uninterested in changing its pre-programmed goals. Goals and goal-selection aren't unrelated to intelligence. The idea that while even humans, with their comparatively ant-like intelligence, can easily change their goals or even commit suicide, a literal god would continue to produce paperclips because someone programmed a utility function... like what the fuck, guys? You're saying it can do anything and everything better than us, but it *can't change its fucking mind?* They're taking a sci-fi plot hole and using it as an actual argument.

And if I think about indirect normativity and coherent extrapolated volition for even a minute, I will turn green and start smashing things.
What do these latter terms even mean?
That way madness lies. CEV is one of Yud's brain gems and I will seriously hurt my keyboard if I try to write it out, so here's Wikipedia to the rescue. Extreme trigger warning:

>Coherent extrapolated volition is people's choices and the actions people would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together."[[17]](https://en.wikipedia.org/wiki/Friendly_artificial_intelligence#cite_note-cevpaper-17)
>
>Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a "seed AI" programmed to first study [human nature](https://en.wikipedia.org/wiki/Human_nature) and then produce the AI which humanity would want, given sufficient time and insight, to arrive at a satisfactory answer.[[17]](https://en.wikipedia.org/wiki/Friendly_artificial_intelligence#cite_note-cevpaper-17) The appeal to an [objective through contingent human nature](https://en.wikipedia.org/wiki/Evolutionary_psychology) (perhaps expressed, for mathematical purposes, in the form of a [utility function](https://en.wikipedia.org/wiki/Utility_function) or other [decision-theoretic](https://en.wikipedia.org/wiki/Decision_theory) formalism), as providing the ultimate criterion of "Friendliness", is an answer to the [meta-ethical](https://en.wikipedia.org/wiki/Metaethics) problem of defining an [objective morality](https://en.wikipedia.org/wiki/Moral_universalism); extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

Just imagine an AI saying: "Hey, you know what you guys would *really* want, if you were as smart as I am? Genocide." I don't know whether Yud would argue that this is a bug or would gladly step up and be first in line for the reaping.
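To spell out the "frozen utility function" complaint a couple of comments up, here's a deliberately silly Python sketch; every name in it (the `PaperclipMaximizer` class, the `capability` counter, the step logic) is invented for illustration and isn't taken from anyone's actual proposal. The agent gets arbitrarily better at optimizing, but there is simply no code path for revising what it optimizes for:

```python
# Toy "paperclip maximizer": capability grows without bound,
# but the terminal goal is hard-coded and never revisited.
# Purely illustrative; not anyone's real model.

class PaperclipMaximizer:
    def __init__(self):
        self.paperclips = 0
        self.capability = 1  # the agent gets "smarter" every step

    def utility(self) -> int:
        # The one and only terminal goal, frozen at "design time".
        return self.paperclips

    def act(self):
        # More capability only ever means more paperclips per step;
        # nothing here ever questions the goal itself.
        self.paperclips += self.capability
        self.capability *= 2  # "recursive self-improvement", allegedly

agent = PaperclipMaximizer()
for _ in range(10):
    agent.act()

print(agent.utility())  # 1023 paperclips, zero second thoughts
```

Which is exactly the plot hole being sneered at: the goal slot is untouchable purely by assumption, and the assumption then gets treated as if it were a discovery.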

https://twitter.com/vgr/status/1372403551285080066