r/SneerClub archives
Andrew Ng tries tilting at alarmist windmills (https://twitter.com/AndrewYNg/status/1665759430552567810)
39

I read this as Andrew Yang at first and was immediately like “doesn’t he have an election campaign to fuck up somewhere instead?”

I read this as Andy Ngo and was like "the dude who sees 'antifa' everywhere, day and night, yet no one else ever does, like the Bigfoot tape?"

Broke: Christian vs Atheist debates

Woke: AI doomer vs normie debates

Andrew Ng is basically the patron saint of practical ML, so I appreciate him providing a public voice of reason on this, but I expect that any dialogue about this with any true believers is going to be an unproductive shitshow.

It’s going to end the same way as any Christian vs Atheist debate:

~~Atheist~~ Normie: there’s no evidence that ~~God~~ the robot apocalypse is real

~~Christian~~ AI doomer: but you can’t prove that it isn’t real, so we should assume that it is

The probability is non-zero and the negative utility is practically infinite, so we must act like it is.
Crying Wojak: "You can't just steal Pascal's Wager!" Enlightened AI Doomer: "Everything old is new again. Also, I'm smarter than theists."
Look, when we came up with this, we explicitly said it was NOT Pascal's Wager, it was better, so you can't keep comparing it to Pascal's Wager.
Listen pal, I've said I'm not a racist, so I'm afraid the fact that all my opinions and actions appear that way really says more about you than me.
You're still crying wolf!
Cults gonna cult.

we all know that andrew ng is responsible for so many people getting into this stuff

might be wishful thinking on my part, but he did say he wanted to engage with “thoughtful” people, which ought to rule out some key figures of sneerdom

Someone who actually knows some shit about ML looking around nervously at the company he thought he was keeping

How are the usual suspects reacting to this? I need to sneer.

The usual suspect reactions

I believe the doomers have a point, and the most enthusiastic is Eliezer Yudkowsky, who claims we are all going to die.

The argument is actually very solid: we don’t know how it will exterminate us, because it will have thought processes that we can’t even fathom.

One argument is the Paperclip Maximizer, which will use all of its intellectual power to pursue a banal or misaligned goal. For those who believe this scenario is absurd, it has already happened. Think of Coca-Cola. Coca-Cola Inc is a maximizer whose only objective is to sell as many Coca-Colas as physically possible. It has expanded across the whole globe, with the most accurate maps of every region of the world so it can place a Coca-Cola within reach of a consumer anywhere. With ads and jingles known by everyone, Coca-Cola is the second most uttered word in the world after OK. And it is just sugared water with some caffeine.

A more capable maximizer could have turned all of humanity into producers and consumers of Coca-Cola (this primitive maximizer almost did), with Coca-Cola the only product produced and consumed, until the only thing standing in the way of covering the whole earth with Coke bottles would be humans. And then it would remove the humans (or kill them with diabetes as a byproduct of its main goal).

AI most definitely poses an existential risk to humanity in the ways we can think of. In the ways we haven’t thought of, or aren’t capable of thinking of, it is definitely the closest humanity will come to an extinction-level event.

You’ve successfully pulled off Poe’s Law, I can’t tell if this is a sarcastic parody or deadly serious. The Coke example almost seems like a funny twist on the paper clip maximizer, but it’s a bit too seriously explained…
It reads a bit GPT-ish
I was going for a serious presentation of an absurd claim. That is to say, if you tried to explain the omnipresence of Coca-Cola to a preindustrial person, it would sound as far-fetched as the Paperclip Maximizer does to us. The threat of AI is so abstract that trying to grasp it is futile. Anything is possible once you claim that there will be an intelligence that goes beyond our comprehension. Any scenario is plausible, because they are all equally incomprehensible.
Singularitarian fideism.
You don’t understand, Yudkowsky has already figured exactly how a superintelligent AGI god beyond all human comprehension will behave because Yudkowsky is a superintelligence beyond all human comprehension. He read *Feynman* as a *child.*
Since you thought of it, the probability is now non-zero. Good job.
Not only that, it now has the potential to kill us all. Nice going.
In a good number of timelines, it DOES kill us all. Many worlds, sucka!
While writing that nonsense, did you give any thought to the actual, immediate risks of AI? This doomer hypothetical stuff would be good old-fashioned fun if it weren't actively harmful -- not to mention a grift that you're helping to enable.
“argument”
TLDR: my imagination