r/SneerClub archives

There seems to be a connection between all of these, since they all get mentioned a lot on this sub, but that’s all I know. All I know is some stuff I read on RationalWiki about LessWrong (and I didn’t read much of their article), and LessWrong calling RationalWiki “regressive leftist.” Could someone clue me in on the context?

You can trace most of it back to Overcoming Bias, the blog of a GMU libertarian economist named Robin Hanson. (I mean, you could trace it back even further to the older and quintessentially American school of libertarianism, but I have to start somewhere.) The idea is that a number of biases are obscuring our ability to reason and think rationally, and we should recognize them and steer clear of them in our pursuit of truth. At least in theory. In practice, the idea is that a number of biases are obscuring our ability to reason and think like Robin Hanson, and we should recognize them and steer clear of them in order to agree with him. This is a common pattern in the community that you’ll learn to recognize.

Eliezer Yudkowsky was a regular on that blog. At some point, largely because Yudkowsky was obsessed with AI risk while Hanson didn’t buy much into it, EY split off and made his own website, LessWrong. It basically built upon the very solid foundation laid by Hanson-like people, and refined the art of rationality (aka the art of thinking like a rationalist) to the point that EY got a huge following. Lots of very rational discussions were had there, and the common themes were reconstructing all of philosophy from first principles, obsessing some more about AI, and basically shunning any kind of discussion or tradition of thought that didn’t involve their precious tenets of rationalism. That involved a peculiar mantra: “Politics is the mindkiller.” A few events of note happened there, such as people getting freaked out by Pascal’s wager, or the emergence of a bizarre techno-libertarian cult advocating for things like a CEO-like god-king, race realism, and the abolition of modern democracy, aka the neoreactionaries (NRx). EY ended up having to tell them to fuck off. At some point the community tapered off and died like most communities, as the huge social media websites started centralizing all discussion.

SSC was born from the comments of LessWrong, by a user named Yvain (also known as Scott Alexander - it’s a pseudonym, and he doesn’t wish to make his identity public). SA saw that many people were very eager to kill their own minds, and he started a blog that would incorporate EY’s themes, plus politics. Crucially, although he’s always described himself as “left-liberal”, he didn’t mind the presence of neoreactionaries in the comments section, so they flocked to his blog in droves. A subreddit was created about the blog, and neoreactionaries abounded there as well.

At some point there were too many people posting about the IQ of black people even for Scott’s friends to bear, and he was advised to repudiate them as EY had. He wrote a long tear-jerking blogpost about how it was all the fault of people who called him a racist for having racists in his comment section, and implored right-wingers to leave, which they did, going on to found their own subreddit called /r/TheMotte. TheMotte is ostensibly about having calm, good-faith, rational discussion about politics, but that apparently involves constantly bringing up the IQ of black people and the evolutionary psychology behind women not being carbon copies of men, for some reason.

So there you have it. Robin Hanson is about how bias prevents you from being rational and thinking like Robin Hanson, LessWrong is Robin Hanson + obsessing about imaginary god computer, SSC is LessWrong plus politics, and TheMotte is SSC plus the IQ of black people. This is what people like to call the libertarian to fascist pipeline.

This is a good genealogy of internet "rationalism," but internet rationalism itself is really an offshoot of transhumanism/singularitarianism and LW's rationality material was always a front for the skynet stuff. I'd trace it back to guys like Kurzweil, Minsky, Moravec, Vinge, etc. Or if you wanted to get really historical (and spicy), Marinetti and the Italian futurists.
Yudkowsky is a big influence on the whole thing, though. Hanson may be genealogically the founder, but it is definitely Yudkowsky who built the "rationalism". He's a low-level grifter - never worked a normal job in his whole life. Much more strongly motivated than others, who can simply do some kind of job or already have enough money.

His career is basically this: he was hired (as a single-person contractor) to write some trading software (during the dot-com times), couldn't do it, started working on a programming language instead, couldn't do that either, and moved on to AI. Basically just escalating to avoid any quantifiable failure that would lead him to the conclusion that he may not be as smart as he thinks he is. In that "pay us money to make an AI" griftosphere^1 there are people with PhDs and/or, sometimes, very rarely, actual working software that does something useful. So it didn't work too well, until he ran into some skynet freakout guy, forgot his name, talked to him, etc. Now Yudkowsky could claim that he's the only one working on an AI that wouldn't kill us all, and that everyone else's would kill us all. That by itself also wasn't that good of a grift, hence the whole rationality shtick, where he has to teach people how to think so that the obviously stupid idea of paying some random "hobo but on the internet" to make an AI would make sense and be the most rational thing to do. Basically they had to reinvent how cults operate, in the latter stages by literally researching and copying from actual cults, down to making various rituals.

^1 AI grift is similar to free-energy grift: just because the laws of physics do not prohibit AI doesn't mean the folks collecting money to build one, back in the 2000s, weren't very similar.
Someone had to lay the foundations of the ideology -- that was people like Kurzweil et al. Yud just popularized it through Harry Potter fanfiction. As far as I can see, his main contribution has been in the form of tactics, i.e. recruiting via "rationalism" like a Scientology front group. This is not insignificant of course, but the enterprise would exist without him.
I mean, specifically "rationalism" as a phenomenon: the way they think, the way they talk, the way they'll twist absolutely anything into an example of themselves being right, even a case of themselves having been as wrong as the flu whataboutism about covid in an email (the Scott Aaronson one). Yeah, the glorious robot god and so on - but they didn't invent that, that's old in the extreme and goes back to all religion. edit: Basically this whole extreme normalization of the most transparent and cringe-inducing rationalizations as the way of "rationality" and knowledge. This whole style of just rationalizing whatever the hell you're feeling that morning.
>His career is basically this: he was hired (as a single-person contractor) to write some trading software (during the dot-com times), couldn't do it, started working on a programming language instead, couldn't do that either, and moved on to AI. Basically just escalating to avoid any quantifiable failure that would lead him to the conclusion that he may not be as smart as he thinks he is.

This is nonsensical; where are you getting this from? Eliezer was a teenager in the dot-com 90s. From the moment he appeared in the transhumanist community around age 16, his central issue was the AI singularity, and it remained such for about ten years, until the turn to "rationalism". Around 2001 he tried designing a language called Flare, specifically for the purpose of programming AI, but it was just a brief episode. He became the chief researcher at his own AI organization around the age of 21 and has been ever since.
Go read his own “autobiography” (written at the ripe old age of 21 or so), come on. I’m including the trading software because it is relevant: with zero experience, and as a teenager, he could just talk someone into paying him to do tech work that never materialized. Coupled with delusions of grandeur, this quite naturally gets you into the AI grift scene of the time. A Terminator movie plot later, you get the “FAI, and everyone else is suicidal” thing.
What early 2000s "AI grift scene"? Eliezer's little institute was alone in working on AI safety (and certainly in working on safety of superhuman AI). Since his mid-teens he wanted to work on superhuman AI; that he got to do so full-time, from the age of 21, might be regarded as evidence of a miraculous ability to invent a new social niche for oneself out of nothing, whether or not one agreed with the aims.
Here's the "autobiography" I was talking about: http://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html

There were conferences where other so-called AI researchers (for example Ben Goertzel) congregated and tried to milk investors for funding. And I described how Yudkowsky's entry into this scene, a certain competitiveness, and his bumping into another fellow whose name eludes me at the moment (but I'll either remember or dig up a reference) led to his creating his own brand with his angle of "the other folks are going to kill us all!". Seems like you know all that perfectly well, given that you've now shifted to "alone in working on AI safety". Which is not even how it started; it started as the typical AI grift, with perhaps more delusions of grandeur with regard to the superhuman part of it. Then, yes, it had to differentiate itself from the other folks (who got PhDs, and some, I presume, might even be able to write a checkers AI). Claiming that the other folks would kill us all because they don't understand how rapidly AI improves itself, or some other nonsense, wasn't (given the existence of, you know, pop culture) as creative as it was sociopathic towards their little society of grifters. Not something any other one of them had done up to that point; humans behave in certain ways, and throwing your entire "tribe" under the bus, metaphorically speaking, isn't really likely even if it's in one's rational self-interest.

> that he got to do so full-time, from the age of 21, might be regarded as evidence of a miraculous ability to invent a new social niche for oneself out of nothing, whether or not one agreed with the aims.

Well, he's definitely talented at talking people into giving him money, I'll give you that. Inventing a whole new anything, though? Just no. Go watch Terminator 2 or something. edit: this also goes for the acausal nonsense (the basilisk, for example); it is nowhere near as culturally unique as you think it is.
You’re being unnecessarily combative, please slow down
>You’re being unnecessarily combative, please slow down Was this a reply to u/dizekat? If so you have a truly bizarre and idiosyncratic working definition of "combative". I'd very much like to see you delineate it explicitly.
How about this way?
I realized who you are, hi. As for what you're saying, you seem determined to be as hostile as possible to the young Eliezer, I don't know why. His history makes perfect sense as an idealism that evolved. He grew up on sf and transhumanism, saw the natural human condition as a dystopian one that could be remedied by superhuman AI, made that his aim. Originally thought that superhuman AI would naturally gravitate to the objectively true ethics, later conceded that values are contingent, refocused on creating "Friendly AI". That's what happened.
What happened in reality is a cult that harms real people in real ways. There are people donating all their spare income to the cause, and there are people in the cult being mentally abused. Then there were people harmed by the general approach to thinking and by taking nonsense seriously (the basilisk). There's nothing good about any of that, except maybe that it is a less successful cult than, say, Scientology.
>I'd trace it back to guys like Kurzweil, Minsky, Moravec, Vinge, etc. Minsky?? Why do you include him? I'm genuinely curious.
> he was advised

He was "advised" in the way the Italian mob advises you what a shame it would be if something happened to your shop.
Yeah, people called his work and doxed him. [Technically, somebody who knew him IRL just dumped his real name and linked it to his nickname; not sure how much doxing that is.] (His old LiveJournal also did that, btw, so it is one of those open secrets. And he also published his pictures in the past, which also makes doxing a lot easier. Not to say that trying to dox him is good - I clearly don't support it, nor does the subreddit, IIRC.)

Another small addition: the creation of TheMotte wasn't the first time SA tried to stop the neoreactionaries taking over his comment section. He also banned comments using the word "neoreaction" for a while. (Discussing IQ and wanting a god-AI king to rule humanity was still allowed, however, so it wasn't that effective; it just forced the fash into hiding, not away. A common theme in the rationalist world: bigots are allowed as long as they are not too openly mean. There was a lot of MRA/PUA stuff as well on old LessWrong, for example, but at least Yud banned that eventually - SSC never did.)
If you're talking about Scott "Alexander": he used to post under his real name, and is still referred to by his real name in the ancestral blogging grounds; i.e., if you know the Hanson/Yudkowsky origin of SSC and read their blogs back when, you know his name.
SA is Alexander, yes. And I didn't read it back then. I don't think the other Scott is really a rationalist blogger. He seems to me more like a brilliant computer scientist with a few rationalist friends and a few bad ideas.
You’re also assuming a lot of ignorance on the part of usernames I recognise from much further back than yours, so slow down on that too
Sorry, I didn’t mean it as ignorance; I just meant to agree on how much not-a-secret the identity is. As for SA getting fired one day: ever since he was suggesting “pockets of survivability” against SJWs in tech workplaces, I honestly don’t care - he is impacting other people’s employment, since you can’t exactly anonymize being a woman or a minority.
>He was "advised" in the way the Italian mob advises you what a shame it would be if something happend to your shop. There's no evidence there was any mob threat.
Dude people were name dropping him in this subreddit.
This didn’t happen beyond a few isolated incidents that were immediately clamped down on
And? Many people's names are available publicly. That does not mean there has been a mob threat.
[removed]
when come back, bring cites
I don't see how I'm being unserious here. Scott tried to [Streisand](https://en.wikipedia.org/wiki/Streisand_effect) himself. That people have mostly played along with this little charade only shows how polite those on SneerClub truly are. However, the mere fact that most people know, and have always known, his real name does not lend any credibility to the theory that someone harassed him or threatened him in any way. If you have any points to make to the contrary, you should be able to reply to me in a rational and logical fashion instead of throwing around ad hominems.

I did this a while back for academic philosophers

It has some different information and perspectives compared to the /u/wallofsneer one.

They are all right-wing tech geeks that are up their own arses.