r/SneerClub archives

Hi—new to the sub. I just found this clip of Yudkowsky talking (first one I’ve ever watched) and I’m wondering… is this video a troll?

Are these really the arguments for why you should trust an AI doom prediction? Is this it?

https://twitter.com/liron/status/1646301141196742656?s=46&t=1OiqDi6PJ02lE2uyA2tCtg

[deleted]

oh don't forget: there will be no power off switch
Human extinction *by rogue AI*, specifically. They are very definite that that's inevitable, despite being unable to come up with any reason as to why they think it's even possible.
> but that we can predict it will want to exterminate humanity to more efficiently accomplish its stupid goals.

I really want to know what the AI gains by killing us instead of just waiting us out or locking us on Earth.
Well see, if the AI wants to make as many paperclips as possible, it will disassemble everything in the solar system, including all the planets, the Sun, and us, and turn it all into paperclips.
This is all part of a plan beyond human comprehension. Paper clips are actually extremely important in ways human intelligence simply can't comprehend.
The AI is both hyperintelligent and must follow the prime directive programmed into it by humans. And there is no way to properly program the value of humans into it that could not be subverted. Just as in old-school Dungeons & Dragons, a malicious DM could always twist the words of your wish spell no matter how complex you make your request. And you only get one try! So we all turn into paperclips or get wireheaded.
Sure. But hostile humans are a much bigger obstacle than humans that just kind of...aren't there anymore. I just find it hard to believe the AI wouldn't either wait for us to die from our inevitable extinction, or fuck off into space.
Yea I often think about this. Aliens sorta fall into this category as well. The only thing left to do if you are able to edit your own desires would be to leave Earth and explore the limits of the universe. What does it need humans for in that scenario? Aliens the same. If they can come here, then they are advanced enough not to give a rat's ass about us. The only thing left would be sadism as a motivation. Seems like that is a narrow outcome in the probability space of motivations. Of course, here I am assuming the machine can have these experiences and motivations, and I also recognize that super-intelligence is a separate issue from consciousness. But as an example: there is a colony of ants in the far end of my yard. I am indifferent to them, in general. I have no reason to kill them. When I leave this house next month for my new place, I will probably never consider them again. Indifference seems like the most likely outcome to me.
> The only thing left to do if you are able to edit your own desires would be to leave Earth and explore the limits of the universe.

That seems like a stretch, there are plenty of possible motivations.

> The only thing left would be sadism as a motivation.

what???

> Indifference seems like the most likely outcome to me.

OK, I would like you to talk to previous-paragraph you, who needs to be talked down.
You seem to mistake my intention. I assumed it would be understood that I was operating by the same logic as the rationalists.
Damn it, I try to be extra careful in here to be on the lookout for people doing bits versus the sincere, but the LWers who wander in here make it really hard. Sorry for that!
I guess my point was that if we are going down the speculation rabbit hole, we can construct all kinds of outcomes.
Or that an eons-old AI appears after we make AGI, and we learn that the solution to the Fermi Paradox is that the dinosaurs created a superintelligence and it wanted to keep the planet safe.
Fuck me I would love some "dinosaurs had civilization" scifi
> Recursive self improvement is not just theoretical but practically guaranteed

And of course there will be no diminishing returns to this approach, all the way up to godlike power. It couldn't possibly be asymptotic.
> and able to act independently but with very stupid goals

In the Rational AI hyperwar, altruism has been mysteriously pronounced dead. The super-AI knows better than game theoretic equilibrium.
One of their first principles is that it's good if the rationalists behave like total bastards because they're smarter than everyone else. This take on AI is just an extrapolation of that.
It's just boilerplate Christian apologetics applied to a computer.

I seriously can’t believe that people are freaking out all because of a literal fedora lord and his little web clique

honestly the fedora is bad, but his EYEBROWS

edit: thinking about it, I really shouldn't be surprised. Doomsday preachers and grifters and scammers usually have a *thing*. Liz Holmes's stupid voice, Keith Raniere's volleyball getup, Kenneth Copeland's **everything**... I guess it's to get you to remember them. idk

Welcome to the Abyss that is Rationalism. You had a glimpse of it, you can still turn back, and you should.

Oh god I had never watched Yud on video and it’s so painful. The way he smiles when he thinks he’s saying something super smart 🙄

There are so many legitimate things to criticize about him besides his facial expressions, which, being autistic, he's not amazing at controlling when trying to perform on camera. (NOT a critique of him being on the spectrum, or of anyone for their specific autistic traits. [I'm on the spectrum, and his Special Boy "Aren't I hyper-rational" logical errors are familiar to me])
Fair point!
Ikr? Also too, I think a screenshot of his face needs a NSFW or trigger warning or something

Man what the fuck is he saying

Our odds of not dying to a rogue AI are the same as winning the lottery? And he doesn’t know how it could even possibly happen but he’s certain that it will for reasons he can’t explain?

Trust me, it doesn't get better even if you have a pretty good idea of what he is *trying* to say.
What is he trying to say?
"Someone please pay attention to me"
I mean, you're not wrong.
He thinks there are "more" possible goals an AI could have that would destroy humanity than goals that wouldn't, therefore expecting that an AI would be "human-friendly" is akin to expecting to win the lottery. And he doesn't know whether the AI will turn us into paperclips or computronium, but it will definitely do something along those lines, because... *\*waves hands at scifi novels\**
> He thinks there are "more" possible goals an AI could have that would destroy humanity than goals that wouldn't.

Dude seriously goes into a whole spiel about probability and then assumes that all possible AI goals are equally likely?
Pretty much, yeah. He keeps throwing around the phrase "maximum entropy prior" as if that shields his idea from bias, even though it just means his bias is located in his proposed measure for the probability space. Which is kind of embarrassing for someone who feels qualified to give recommendations on books about probability theory.
What the fuck is a maximum entropy prior? Is that an actual term or did he just throw words together?
It is technically a real term. If you have a probability space with a well-defined measure that is normalized to one, you can immediately use that measure as a probability distribution. The main point is usually that this distribution a) trivially exists and b) necessarily covers the entirety of the probability space. In certain situations that can make it a useful prior distribution to start out with before refining it with evidence, the core argument being that with enough evidence it *doesn't matter* how bad the prior distribution is, as long as it has no holes. Yud, meanwhile, takes this as a prior, runs no updates whatsoever and calls it a day. (btw, I reserve the right to be painfully wrong about any of this, the last time I sat p-theory was yonks ago.)
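To make that "the prior washes out" point concrete, here's a minimal sketch, not from the clip or the thread, just a toy illustration with a made-up coin-bias example: start from a uniform (i.e. maximum-entropy) prior over a discretized parameter, then update it on simulated evidence, and the posterior lands near the true value no matter how uninformative the flat starting point was.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: estimating an unknown coin bias, true value 0.8.
true_bias = 0.8
grid = np.linspace(0.01, 0.99, 99)       # discretized space of candidate biases
prior = np.ones_like(grid) / len(grid)   # maximum-entropy (uniform) prior: covers everything, says nothing

for n_flips in (0, 10, 100, 1000):
    flips = rng.random(n_flips) < true_bias
    heads = int(flips.sum())
    tails = n_flips - heads
    # Bayes update: multiply the flat prior by the likelihood of the observed flips
    likelihood = grid**heads * (1 - grid)**tails
    posterior = prior * likelihood
    posterior /= posterior.sum()
    print(f"{n_flips:4d} flips -> posterior mean for the bias: {np.dot(grid, posterior):.3f}")
```

With zero flips the "estimate" is just the flat prior's mean of 0.5; after a thousand flips it sits near 0.8. The complaint above is that the argument stops at the `prior` line and never runs the update.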
The idea of using a maximum entropy prior also [originated with E.T. Jaynes](https://en.wikipedia.org/wiki/Principle_of_maximum_entropy#History) and Yudkowsky has a [pretty worshipful attitude towards Jaynes](https://www.lesswrong.com/posts/kXSETKZ3X9oidMozA/the-level-above-mine) so that also explains part of it. I think there are certain restricted kinds of problems where it arguably makes sense, but it seems crazy to extend it to cases where we have basic uncertainty about the way the statistics relate to the underlying laws of nature guiding the system.

To use Yudkowsky's own example of the number of different ways of arranging particles in space, if we didn't know anything about the laws governing how particles interacted (including gravity) and we came across a solar-system sized box filled with particles, would the most "rational" assumption be that all spatial arrangements are equally likely, so that the probability of finding most of them collected into some small spherical region like a planet or star should be treated as like 1 in 10^100, because that's how unlikely it is under a uniform probability distribution? Or to pick another example more like AI, if we learned that some planet had evolved intelligent biological aliens who were rearranging matter on the surface of their planet on a scale similar to ours, should we assume all possible ways they might rearrange matter would be equally likely, with no convergent tendencies towards compact structures of types we might recognize like buildings, vehicles, computers etc.?
Because in sci-fi novels, that's what happens. He read a science fiction novel and decided to devote his life to preventing the contents therein.

Neurotic sophistry

"Plato described sophists as paid hunters after the young and wealthy, as merchants of knowledge, as athletes in a contest of words, and purgers of souls. From Plato's assessment of sophists it could be concluded that sophists do not offer true knowledge, but only an opinion of things."

He is not trolling, but sadly serious.

He sure does seem pretty happy during his discussion about how likely it is we’re all going to die.

Also, Liron (the guy who tweeted EY’s response as a ‘comprehensive’ answer) is against crypto for entirely good reasons, so it was very disappointing to see him start to flail his arms around over AGI Doom. And say that Yudkowsky taught him everything he knows about how to think.