r/SneerClub archives

A lot of people in the LW and SSC community seem to believe that being intelligent is some sort of trump card, some kind of superpower. Therefore, in their view, superintelligence is omnipotence. If you let a security guard talk to an imprisoned Multivac, the latter could wololo their way out no matter what.

Absolutely. I do have to wonder how much of it is a complete misappropriation of how that sort of thing works in common tabletop RPGs and most computer RPGs -- put points in your smart stat or your talking stat and you can effectively just mind control people.
Not to be unfair to the whole LW/SSC crowd, but I think there are a lot of people in it who are not particularly charismatic, don't understand charisma, and think it's basically magic. That you, too, can be infinitely charismatic - if you only had enough *intelligence* to model the stupid monkeys and their responses in real time, you too would be able to persuade them to do *anything*. Obviously if you can imagine being able to do it, a superintelligence would be able to do it in practice, right? Right?
[Sparkly Elites](https://www.lesswrong.com/posts/CKpByWmsZ8WmpHtYa/competent-elites), in which EY directly confuses charisma and intelligence
> (Interesting question: If I'm not judging Brooks by the goodness of his AI theories, what is it that made him seem smart to me? I don't remember any stunning epiphanies in his presentation at the Summit. I didn't talk to him very long in person. He just came across as... formidable, somehow.)

> And really, I've never been to any sort of power-elite gathering except those organized by the sort of person that would invite me.

So, Yud can't even identify the quality that makes people appear to dazzle before him, his perception of these people is influenced by their willingness to invite him to their gatherings, and the infatuation he has for them informs a worldview wherein ultra-rich capitalist brains are actually a legitimate class of people who would benevolently rule the world.

> (I'm not part of that world, though I can walk through it and be recognized as something strange but sparkly.)

notice me senpai
Clearly he prefers playing a Sorcerer with the Spell Finesse feat.
They don't call it the dark arts for nothin'.
it'll definitely be from an anime, we just have to find out which one
> I was shocked, watching Goku, because from everything I'd read about martial artists before then, fighters were supposed to be fools in keikogi, who couldn't understand technology or engineers or the needs of a fragile young child, but who'd gotten ahold of large amounts of power by dint of seeming reliable to other fighters and gods.

> Goku appears to be an idiot, but his understanding of fighting techniques and strategy *is visibly much smarter than average mortals'.* He is able to figure out the weakness of the strongest enemies in the universe when other fighters and even gods are clueless. Even Goku's enemies are so impressed by his intelligence that they often become allies.

> Like a true genius, Goku achieves this while training on his own without a formal education. He understands enough to be able to teach his son to be the strongest fighter in the universe until he goes to public school and is ruined by a common "education".
But if we watched that anime, it would persuade us to be rationalists!
We are way too smart to fall for infohazards. So don't worry about it. e: found the anime, it is called 'the abyss'
> We are way too smart to fall for infohazards.

That's exactly the first thing an infohazard would make you say! I've read many SCP articles, so I'm very wise and smart.
I secretly like the monstrous idea of harvesting all somehow-vulnerable humans worldwide and sacrificing them to mitigate perceived 'dangers for all of humanity', the irony is completely lost on me, so I'm very wise and smart.
"Reading" is the most dangerous of all infohazards.
much as Death Note viewers all came to Timeless Decision Theory themselves just from watching it
That's part of it, but I think it's also partially overextrapolation and oversimplification. The claims rationalists make about the value of intelligence are true to some degree. People are smarter than chimps, and that does allow us to do things chimps can't. But it's stupid to try to extend that in some kind of linear way. What's particularly baffling is that rationalist arguments about AI will sometimes admit that we have no good reason to believe we can accurately model AI capabilities, then make a bunch of assumptions about how AI will be super smart and therefore hypercapable anyway.
Looking at it in terms of defense mechanisms, it is entirely possible for an overtly hyperrational person to have some kind of deep-down belief, outside of conscious awareness, motivating behavior in exactly that way.
[deleted]
[deleted]
> If you let a security guard talk to an imprisoned Multivac the latter could wololo their way out no matter what.

solution is simple, just research [heresy](https://ageofempires.fandom.com/wiki/Heresy) so the sec guard dies after being wololo'ed
Doesn’t this trace back at least as far as Yudkowsky’s pre-LW writings where he said he could LARP as an AI and convince anyone to let him out? All I can remember is that he said he’d done it a few times with people but did not want to post the transcripts. I’d love to take him up on the challenge.
Even worse, when someone questioned why an opponent wouldn't just commit to repeatedly saying "No" to whatever the AI said (a perfectly valid response), a bunch of other people dismissed that as an unacceptable tactic, on the grounds that doing so "would defeat the purpose of the experiment." You don't say?
THEIR STATED STRATEGY FOR MITIGATING THE EFFECTS OF THE BASILISK IS TO BE OPEN ABOUT NOT ACCEDING TO ACAUSAL BLACKMAIL. In other words, just repeatedly saying no to whatever the AI says.
He started losing, and stopped offering the challenge even for money because he "didn't like the person he became when he lost," IIRC.
My recollection was that he *only* ran it with people from his own following, so talk about sample bias.
That particular belief is actually a reasonable one.

Link to the comic, for alt-text and such fun things: http://www.smbc-comics.com/comic/passive