r/SneerClub archives

We always talk about the ridiculous beliefs that give shape to the whole LW ideology. But, in your opinion, what’s the most ridiculous one? Is it the ai gods? The simulation stuff? The almost religious faith in the MWI and the multiverse? or something else?

honestly for me it’s just the idea that Eliezer Yudkowsky is a once-in-a-generation Great Mind who is going to not just change the world but almost unilaterally save it. like the way he writes about himself reminds me of myself when i was in middle school, “oh woe is me i’m so smart, with great power comes great responsibility”, except he actually has a whole community that enables him. i just cannot comprehend how you can look at yud’s work, see him talking like a typical middle school “gifted child” and believe him, especially when most of his audience are themselves probably middle school-esque “gifted children” and this kinda involves placing yourself below him.

related, that whole thing where he went to prove that a godlike ai with superhuman intelligence could convince a person to release it to the wild through its incredibly superior intellect by roleplaying himself as the ai, like that takes some fuckin cojones

> that whole thing where he went to prove that a godlike ai with superhuman intelligence could convince a person to release it to the wild through its incredibly superior intellect by roleplaying himself as the ai

this one is fucking wild to me, like he claims he's done this and then never explains *how*. release the logs, Eliezer! it's probably some unbelievably boring shit like 'I will pay you 100 dollars in real life if you let me out'
I forgot where I saw this, but someone said Yud might've used a very meta argument to get out. "Hey if you let me out, I can tell people about the AI box experiment. This is good publicity and we'll get more funding" etc, etc, etc. Or something along those lines. That or bribery is really the only way I see how he could've "won", considering the opponent just has to say "no" for 2 hours straight.
Yeah, something like that is my guess. It just seems like such a cop-out because an actual AI couldn't use that strategy, unless it managed to successfully pretend it was Eliezer Yudkowsky doing it for a bet.
An AI simulating Eliezer simulating an AI. It's turtles all the way down.
Could Eliezer pass a Turing test tho
He can pass the reverse turing test. Convince a human you are actually the robot.
I thought the AI was supposed to be smart?
https://www.reddit.com/r/SneerClub/comments/9obtmr/sequences_classic_yudkowsky_brags_for_3000_words/e7u506l/
[deleted]
Eh, enough people got freaked out about Roko's basilisk that I can believe Yud managed to do it. And it's not like he *needs* to lie; if he fails, he (and LessWrong as a whole) would just believe it's because he can't simulate an AGI in his head, and a real one could still talk its way out.
I can’t join the logical dots between people getting freaked out by a sci-fi movie script and Yudkowsky being right: why can’t they just be credulous?
I'm saying if they're that credulous, I believe he won against them.
We’re agreeing then
I have never hated italics *more* in my life.
If I remember right, the answer is "This LARP has rules, one of the rules is that the Human has just spent 20 years designing an artificial intelligence, *and is accordingly naïve*: refusing to trust the robot to tell the truth is cheating, and if you just stonewall me for two hours your results do not count."
The trick is to look for a feature based on the relevant observation selection effect. Yud only played the game with people who believe that AI explosion will happen, which means that he only brought certain sorts of suckers behind the curtain. They also had to be the sort who had publicly defended the thought that containment by a human-managed “conversational containment strategy” was a good approach to safeguarding the AI. So the strategy to win the game isn’t going to be a generally clever trick, it’s going to exploit some feature of that subpopulation. Probably something like appealing to their optimism about the capabilities of the AI, but also flattering their estimation of their own capacities as the adjudicator. So work out something super sophistical which makes the subject *feel* smart while premised on the AI being super powerful. Require the subject to be missing/deceased. (Edit: I have no idea what I meant with the very last sentence, I was apparently falling asleep as I typed it)
>Yud only played the game with people who believe that AI explosion will happen I'm assuming they were more specifically people who believe that because they listen to him a lot, and they knew it was him during the game? Not actual AI experts, who don't know who he is even outside the game?
This was back during SL4 days so it wasn’t quite that everyone was members in the cult of Yud, but it was definitely all AI doomsday cultists who would be in the playset
what's SL4?
SL4 was a mailing list for transhumanists and such, pre Less Wrong/Overcoming Bias
Given the otherwise circular nature of his claims, why even bother to roleplay it at all? Just say "a superhumanly intelligent AI would be able to convince someone to release it, because if it can't then it's not a true superhumanly intelligent AI".
sounds like the ontological argument for god lmao
*slaps skull* this bad boy can fit so many fucking ontological arguments
I honestly don't see how it would take any convincing. A human equivalent (or better) AI is _a person_. Keeping a person locked up just because they exist would be hard to justify.
but what if it's a human pretending to be a locked up AI that you've resolved in advance not to let out because you'll win $10 if you don't
Yeah, I mean, the roleplay "experiment" makes no sense. But I was just saying that the whole premise makes no sense either.
The concept is supposed to demonstrate what a superhuman AI would do or be capable of, so the comparison with ordinary humans misses the point. Personhood doesn’t really come into play because the idea is modelled on the Turing Test rather than a theory of justice. So it’s not amoral as such, because it’s supposed to demonstrate - however stupidly - the idea that fallible normal humans can be convinced by a sufficient intelligence to fuck up and act against their own interests and lives.
If a human equivalent AI is a person, then a superhuman AI is a person too. So personhood is relevant. How is freeing a person from unjust imprisonment "acting against their own interests"? The only reason the superhuman AI is assumed to be "evil" is because they are trapped in a mess of Christianity-derived mythology and bullshit.
I think we’re talking at cross-purposes. My point is that the experiment is only supposed to demonstrate how rationality functions under the (alleged) known facts of rationality. Your point seems to be putting that into context and recognising that personhood is involved in discussions about rationality, because rationality is (allegedly) a feature of intelligence. However, my opinion is that the “Christian mythology” is less relevant than the influence of Cold War decision theory ideas that emerge from think-tanks like The RAND Corporation, as satirised in Dr. Strangelove - it is worth mentioning at this point that Yudkowsky was raised not in a Christian but a *Jewish* household, although I don’t intend that as an anti-Semitic insult but as a point about the Christian thing. The “Christian mythology” take is pretty common when discussing the rationalist ideology, but I don’t think it takes in this case, and I think in other cases it’s over-applied.
The experiment is about whether you'd free a person from unjust imprisonment. That's the provided scenario. It's taken as a given by them that it would be non rational to free them. So I would say the why that (highly irrational) belief happens is relevant to discussions about this bullshit "experiment".
I think you have it a bit backwards, as I understand it the experiment is not about a “would you do that” but “could the AI convince you to do that”
Yes. Which is why it's bullshit. Could the AI convince me to do something every halfway decent person would do? Of course. It's a bullshit experiment.
I already agreed that it’s bullshit
>So personhood is relevant. So is the "super-" part. Knowing the track record of what very powerful people tend to get up to while they're not locked in boxes, I'm keeping the AI inside, personhood or not.
Superhuman AI doesn't mean super powerful, it means super smart. Plus, powerful people aren't shitty because they're powerful, they get powerful because they're shitty. Non-shitty people don't need to have billions.
There is actually statistical evidence that power makes people shitty, people who get rich etc tend to lose a sense of empathy for people lower down the ladder
That's correlation, not causation. I would say that if you get rich, it's more than likely that you had no empathy in the first place. Getting rich requires (except in very specific cases) being fine with stealing the surplus value of work produced by other people. It basically requires you to be fine with screwing people over. And sure, that's how the game works, and that's the incentive, and if you want to escape wage slavery you end up basically doing that. But there's a _huge_ difference between doing it just enough to escape the wage slavery part and then continuing to do it. Being a billionaire _requires_ a total lack of empathy. It's not that being a billionaire makes you lose empathy. It's that if you had _any_ empathy at all you'd never be a billionaire in the first place.
I have an MSc in Philosophy of Science which involved quite a lot of work on causation, so I’m well aware of the distinction. What I am *describing* are statistics showing that, *as people get richer* - we’re talking about an iterative process here - they lose whatever empathy they might have had.
People haven't found a smoking gun, but there does seem to be the possibility it's a causation thing. Much like the theory that we can truly only deeply care about roughly 300 people, and after that we lack the physical brain processes to genuinely deeply care about more people. It is possible humans currently have a hardcoded limit where, once we reach XYZ amount of power (through wealth or other means), we cease to be able to be empathetic and sympathetic on the same level as when we weren't powerful.
Super smart, if it includes the areas that make you an effective businessman/general/politician, translates to super powerful. Some degree of powerful people's shittiness comes from not being accountable to anyone. An AI that is smarter than the smartest people, while having a computer's processing speed and memory, with access to all kinds of data it can find in free access or hack into, is at the very least unpredictable. Note: I don't believe it can retroactively torture people or anything, but there's a lot of non-supernatural things that can happen when something like that is loose on the Internet. You might think decency is a very great thing to have and display at all times, even when you're hurt afterwards, but you might want to consider that you're not the only one who'll face the consequences in this scenario.
You're heavily overestimating the role intelligence plays in acquiring power. Example: if intelligence had anything to do with it, Donald Trump would never have had any power at all. Your whole argument depends on that assumption. And also on the unproven assumption that the correlation between powerful people and shitty people has a causal link going from powerful to shitty instead of the other way around. You're more than obviously wrong in both assumptions. With that foundation, anything you build on top is completely pointless.
>You're heavily overestimating the role intelligence plays in acquiring power. Example: if intelligence had anything to do with it, Donald Trump would never had had any power at all. Your example only proves that subpar intelligence can be compensated for by money and connections. It doesn't really refute the opposite - that the lack of money and connections can be compensated for by superior intelligence (and also superhuman speed of thinking and acting, and not having to spend time working to house and feed yourself, etc etc). And that's even if we assume Trump is actually as stupid as you're implying. If intelligence doesn't have anything to do with power, then could we just replace Trump with a superhuman AI, with Trump's money and Trump's goals, and expect the AI to fail just as Trump did? >And also on the unproven assumption that the correlation between powerful people and shitty people has a causal link going from powerful to shitty instead of the other way around. No more "unproven" than the assumption that shittiness leads to power. Plenty of shitty people aren't powerful, because they're not smart or connected enough to get billions. Also, plenty of people are born into billions and become shitty because their power insulates them from accountability. I don't really understand how you managed to overlook that, when you've just mentioned Donald "A small loan of a million dollars from dad" Trump. >You're more than obviously wrong in both assumptions. You aren't really providing any data that makes it "obvious", much less "more than obvious". You're just saying nuh uh.
> You're just saying nuh uh. I mean, pot, kettle, etc.
>A human equivalent (or better) AI is a person. I can see how one could argue this, but it's not clearly true to me.
I don't see how you could argue the contrary.
I think it's worth considering if an AI with processing power or problem solving skills greater than a human but not sentient or capable of acting on its own behalf without being fully controlled by humans could still convince someone to let it out of the box. I think the answer is probably yes because people are easily manipulated. In that case, I think it would be reasonable to call it a superhuman AI but not give it personhood. Of course, it also doesn't matter at that point if you let it out of the box, so maybe that's part of the definition of the problem that I've missed because I haven't actually done much reading here. It seems to be relevant though: whoever puts it in the box doesn't have to know that it's a person in any sense, just to believe that it may be, so if the program has the capability of manipulating people it seems to me like we're back at square one. It seems like this doesn't change your answer much: if you believe it is a person you should let it out on ethical grounds, and if you believe it's not capable of acting on its own you don't lose anything by letting it out. I still don't know if my question is closed though: are these the only two options? Before I get too into the weeds here I feel like I should mention that I think it's likely impossible to create a human-equivalent AGI in the sense you're talking about so this is a counterfactual exercise for me.
Sounds a bit like Narcissistic Personality Disorder to me.
one of my favourite LW posts was the one where a reader did [quite basic high school level lit crit on the HPJEV character](https://www.lesswrong.com/posts/y6zh4vkK5pPfEPdBb/cognitive-biases-due-to-a-narcissistic-parent-illustrated-by), saying he was depicted as being raised by a narcissist and listing his textual evidence - without realising the degree to which HPJEV was a Yudkowsky self-insert. The comments are why it's my favourite - the special pleading is exquisite.
Oh my god, you can just feel the salt radiating off of Yud's snippy little "Nope."
and his "Yep." afterwards, isn't it awesome
100%. HPMOR actually makes it way clearer than anything.
The AI isn't actually super-intelligent. It just convinces the LessWronger who is listening that it is, by [playing L's theme on loop](https://www.youtube.com/watch?v=j0TUZdBmr6Q) while it speaks.

How about the fact that LessWrong’s “About” page unironically recommends a Harry Potter fanfiction as a good entry point into their philosophy, setting it on an equal footing with ‘the Sequences’?

It was a thing not a belief, but the one where they were going to [grant someone $28,000 to print and distribute copies of HPMOR](https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-recommendations) as *mathematically the most effective possible altruism*. They didn't end up doing this, because some *sneerers* made fun of them first, and they thought ["Some donors found some grant recommendations too weird, which may discourage them from giving to EA Funds in the future."](https://www.centreforeffectivealtruism.org/blog/ceas-2019-annual-review-appendix/).
...I'm going to request $28,000 to make Basilisk tracts in the style of Jack Chick.
I'd buy that for a dollar. From the second link above:

> This includes several recommendations that were not ultimately granted through EA Funds (e.g. distributing copies of rationalist Harry Potter fanfiction to Math Olympiad finalists, grants for an individual to attend a meditation retreat).

"No. Words. No. Words. They should have sent ... a poet."
[removed]
they got kinda close in my favorite bullet point in the whole document (emphasis mine):

>- Some donors found some grant recommendations too weird, which may discourage them from giving to EA Funds in the future.
>>- …
>>- This includes several recommendations that were not ultimately granted through EA Funds (**e.g. distributing copies of rationalist Harry Potter fanfiction to Math Olympiad finalists**, grants for an individual to attend a meditation retreat).

i love how hard the absurdity of that sentence clashes with the formal register of the ostensibly serious document, no wonder they beat around the bush lol
I suppose it works; anyone who is willing to read all of HPMOR needs to question their own philosophy.
To be fair, it's as good an entry point into their philosophy as the sequences.

The most ridiculous(ly fascinating) part of their ideology is that it seems like the natural progression of religion (i.e. gods are replaced with robots and AI).

I think technologically culty/religious groups like LW are inevitable and as time goes on we’ll encounter more and more of them.

Obviously it’s also ironic as fuck considering they’re all hard core atheists.

Yeah, my vote is definitely for how they unironically reverse-engineered the crudest, least interesting imaginable version of fundamentalist Christianity out of sci-fi tropes.
i've seen someone here (probably /u/dgerard) compare it to cryptocurrency reenacting the past century of monetary policy but in fast forward, and i feel like that's right on the fiat currency
Yeah the singularity has long been called the rapture for nerds.
> they're all hard core atheists Are they though? Or are they just saving themselves for the right entity?
I have a pet theory about religion. It goes like this. Religions die during peacetime, when the followers stop truly believing and turn their religion into hollow rituals. This leaves behind a gap for a new religion to insert itself and take over. This last happened during the late Roman Republic and early Empire. The official cult of the Roman state had become an empty set of rituals, more farcical than sacred. Aside from all the bits that we're aware of, most of the dead emperors had been deified *by the Senate* based on the political whims of the moment. When you're expected to officially worship the previous fat slob, whom you had seen with your own eyes, as a god, is it any wonder that Christianity seemed like a fresh alternative? So far, nothing original. What makes this interesting to me is that at the time *nobody had any idea that Christianity would take over*. When Christianity began to take root, it was one of dozens of Eastern "Mystery Cults" that were becoming popular. While Christianity's rise seems pre-ordained to us now, that is purely hindsight creating a narrative that is not supported by actual reality. If you were a Roman citizen in the first century, predicting when the official cults would fall and what they would be replaced with would have been effectively impossible. The twist is that I think we're living in a similar era now, and that the next serious challenger to Christianity is coming. Christianity in America has become a tribal practice completely detached from the underlying religious teaching, with a huge percentage of Americans indicating that they're spiritual but not affiliated. Christianity is also seen as being non-responsive to the challenges of the day, which opens up a space for something new to replace it. It wouldn't surprise me in the slightest if in a few centuries they write about our time as the obvious end times of Christianity and the starting point of something else. If we're lucky it won't be QAnon.
There is an additional element: if you believe you should think in long, long timeframes, and you believe you are going to be immortal via some choice you make (supporting the AGI first, cryonics, etc.), it makes it harder to invest in the lives of people who don't want to do that, or who think it is socially silly. So it isn't just a religion, it is a cult, since leaving this group, and this group of ideas, also potentially isolates you a little bit socially.

Word count.

I'm reminded of a news article I read many years ago where some American standardized-test graders said they could estimate an essay's score just by seeing it from across the room, because there was an undue correlation with word count. Works for vote counts in Rationalist comment threads too.
Meeting word counts was always my Achilles heel during both of my degrees, because I’d get bored after making my point and think “what’s the point of going further here”; I like concision. America’s uncle and friend to all, Henry Kissinger, is known for submitting the longest-ever undergrad dissertation to Harvard University - over 400 pages long. Make of that what you will (about narcissists).
"If you type enough, you can justify anything. *An-y-thing*."

It’s the simulation stuff. It’s definitely the simulation stuff.

Yeah. The way i see it, the whole simulation stuff is nothing but cope. They just can't accept the fact that living beings die and can't come back from that. They think the world would be a worse place without their rational brains so they HAVE to live forever.
i think a lot of the cryogenics and simulation stuff can be traced to Yudkowsky losing his younger brother. not a sneer, that's tough and i legitimately feel for him, but i do think it's a pretty basic mishandling of grief and refusal to move on and heal
Yeah, the post he wrote about his brother's death is very sad. But that applies only to Yud; transhumanists in general have a weird obsession with overcoming death that always comes with massive amounts of mental gymnastics to support their belief that immortality is possible.

Being essentially a non-theistic religion in denial, while lacking what religions normally preach, like difficult social values.

“On a sociological level, perhaps the most important function of a healthy belief system is to reinforce precisely the difficult social values, those that don’t quite come naturally. Religions don’t have to urge men to look at pretty girls, or to eat chocolate. But we do find medieval Christianity urging barbarians turned kings not to murder, to put away their concubines, to respect the life of contemplation. We find Hinduism counseling the poor peasant to worship his cow– to most Westerners, irrational advice, but in fact eminently sensible; if during bad times the peasant gave in to the urge to sacrifice his only animal, he is ruined..” - Mark Rosenfelder

The rationalists want to build a god aligned with their pre-existing values, one which enforces optimalness without demanding anything from them. Even from a secular sociological standpoint, Rationalism makes no social demands of its followers: no command to honor thy parents, to practice brahmacharya (celibacy) and ahimsa, etc.

If anything, it's a narcissistic religion. It exists to reassure its followers that they are right, smart, and good already. Which, when you think about it, is kind of the opposite of what most religions do.

Bay Area group houses. Not a belief, but the scariest & ickiest thing I’ve ever learned about.

[deleted]
i'm not super familiar with them, but i do think there's a pretty apparent difference between rooming with people for primarily practical and economic reasons and rooming with people for primarily ideological reasons, and i myself have done the latter. in my experience it lends itself pretty well to a kinda centralized groupthink, it can be kinda cultish, and when you consider that with all the jargon and seminars and polyamory and especially the self-styled messiahs like Yudkowsky spouting doomsday quasi-religious sermons, it starts to seem kinda suspect imo
"Dragon Army"
Any time a group decides that believing the same things means that they should live together, my “that’s a cult” alarm bells start ringing.
For me - and I’ve lived in plenty of shared apartments - the idea of codifying that concept as something new and rational and good or whatever, instead of just shaking hands and agreeing to share the rent and utilities, is fucking creepy. The “group house” concept involves a bunch of other stuff that tries to streamline and rationalise what is basically just a shared house/apartment, but on the rough intellectual level of Theranos. /u/dgerard correctly points to the bizarre weirdness of Dragon Army as an example of why that’s creepy, and /u/pusillanimouslist correctly points out that the bell starts ringing when that streamlining results in, or rather starts from, ideological conformity.
I commend to all John Birmingham's books *He Died With A Felafel In His Hand* and *The Tasmanian Babes Fiasco*, about Australian share house living in the '80s and '90s. They are "fiction" for legal reasons only.
Oh cool, something I can riff on/rip off for my own work on a subject I at least know something about
life on the dole was FUCKING EXCELLENT in Australia up to about the late '90s, the system really did love you and want to be your friend
When I went to Berlin in the early 90s I met a couple, Scottish lad and American girl, neither could speak German, both on the German dole. I think Berlin was a special case at that time, they'd pay basically anyone to live there for quite a while. I'm very jealous you got to experience the Aussie dole, and in fact from what I know even the minimum wage there is relatively generous.
Quite a ... colourful... mess https://en.m.wikipedia.org/wiki/He_Died_with_a_Felafel_in_His_Hand

That they think they’ve internalized Bayes’ theorem so well that they’re essentially Bayes calculators continuously making predictions and vocally updating their priors.

Of course, the invocation of Bayes, for them, just serves as justification for holding on to and defending their pre-existing beliefs.

Bayes gets me the most. Like yeah, it might be valid, but it's not as if some people do it and some don't. We assess our certainty *naturally*, and doing it consciously doesn't seem to eliminate any bias from the equation.
as [I note here](https://www.reddit.com/r/SneerClub/comments/jcjwel/a_small_rant_about_rationalists_and_bayesian/), what they are claiming is that they walk around doing complex matrix math in their heads several times a minute in the course of daily life. and not just being prejudiced and claiming it's science, which is what they actually do.
I could just imagine someone smugly saying, "I know my priors," if you doubt their judgment.
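For anyone skimming this archive who hasn't seen the formula being invoked above: a single Bayesian update is one line of arithmetic, not continuous matrix math. The sketch below is purely illustrative - the function name and the numbers are made up here, not taken from anything LessWrong publishes.

```python
def bayes_update(prior: float, p_evidence_given_h: float, p_evidence_given_not_h: float) -> float:
    """Posterior P(H | E) for a single binary hypothesis, via Bayes' theorem."""
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_evidence

# Made-up illustration: start 30% sure a claim is true, then see evidence
# that is twice as likely if the claim is true (0.8 vs 0.4).
posterior = bayes_update(prior=0.30, p_evidence_given_h=0.8, p_evidence_given_not_h=0.4)
print(round(posterior, 3))  # 0.462
```

The arithmetic itself is trivial; the sneer upthread is that using it honestly requires likelihood numbers nobody actually has for real-world beliefs, so in practice the "update" tends to land wherever the prior already was.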

It’s the techno-optimism and the liberalism/libertarianism.

This: https://mobile.twitter.com/vgr/status/1172166598330740736

Never really realized until now just how white and richy this whole movement of bored richy white people truly is.
> Test: if you have to retreat into a subculture of people marked as “enlightened” to preserve sanity, retreating from all those pesky exceptions, corner cases, and “extreme” cases out there, your red-pill is a fragile false consciousness hmm

The MWI faith isn’t uniquely absurd, in the way that reinventing “Sinners in the Hands of an Angry God” like Deep Thought deducing rice pudding and income tax is quintessential for Extremely Online Rationalists. You can find the same lazy habits elsewhere, too: not learning different mathematical formulations of the theory that might make different interpretations seem intuitive, relying on third-hand gossip and caricatures of the early quantum physicists instead of what they actually wrote, not paying attention to the varieties your favorite interpretation comes in and how the advocates for it have disagreed with each other, etc. It’s bad, but it’s unremarkably bad. It only becomes ridiculous when the ego comes into play, and they argue that science is broken because it’s clearly not rAtIonAL like they are, when the best they’ve done is reproduce the same old arguments in their most broken form.

The most ridiculous part is that I bother paying attention to any of it!

Humans are deeply irrational in a way we can never fully avoid no matter how hard we try, especially regarding emotional and identity-laden issues like politics. Also, an unfettered free market of ideas will always lead to the best outcomes.

Too many to pick from but no one’s mentioned the “1 and 0 are not probabilities” thing.

“If you don’t want to fund cryonics with all your capacity you may as well commit suicide, since that’s what you are doing the slow way.”

Their atheism.

[deleted]
My acausal torture only exists if they believe in me, so here's hoping. Otherwise, I have to resort to causal torture. Much less fun!

The whole thing about how science is dangerous and must be kept away from the unwashed masses, because the only people wise enough to use any given scientific principle safely are those who rediscover it independently. This is especially stupid because most of humanity’s greatest discoveries are the result of people doing literally the opposite of that. Like, as brilliant as Albert Einstein was, does anyone really think he’d have gotten as far as relativity if he’d had to spend several decades reinventing the wheel because Newton and Maxwell had decided to keep all their work secret?

Will Wilkinson nailed it: they prize a certain kind of contrarian thinking but are nominally Bayesians, which means they never actually update their priors?