r/SneerClub archives
LIVE: Yudkowsky debating AGI with rossbroadcast on Twitch (https://www.twitch.tv/rossbroadcast)

Okay, watching the twitch recording from the start… I’ll probably get bored of it, but at least I’ll save you all part of the trouble of sneering at this piece by piece.

First few minutes… it turns out Eliezer got the interviewer to agree not to read up on him in advance, and Eliezer characterized his previous interviewers as already read up on the subject (for those of you who didn’t see the previous sneerclub posts, the bankless podcast took Eliezer as is and Lex only pushed back once or twice). So my expectations are starting out low.

So the interviewer starts out reading some choice quotes from Eliezer’s airstrikes article. He acknowledges the proposals as consistent given the level of Eliezer’s fears, but disagrees that those fears are valid. Agrees to a Q&A format…

1st question: About AGI, the definition of AGI. Using the Star Trek TNG ship’s computer as an example, is it an AGI? The robot in the 80s movie Short Circuit, is that an AGI? [My comment: you know, random comparisons to sci-fi are about the level of question and consideration Eliezer deserves, but I am nonetheless disappointed the interviewer can’t do better]. Eliezer gives a rambly example and refuses to classify either fictional example as AGI or not AGI, since neither is real. “Everyone is going to die irrespective of how you define AGI.” [My comment: Eliezer can’t even give a straight answer to a friendly opening question.]

2nd question: “Can you walk us through the scenario you are most worried about for how an AI is going to make humanity extinct?” Eliezer gives his typical hedge about how you can’t predict a superintelligence (which the interviewer accepts without even asking for a defense of it). [My comment: I’m disappointed Eliezer didn’t even get to his magic diamondoid bionanotech scenario]

Okay, never mind, I can’t do this. The interviewer is softballing it even worse than the previous interviewers, and Eliezer is still being extremely wordy and roundabout in answering. 25 minutes in and I haven’t even heard the rehashes of Eliezer’s standard canned arguments, just lots of wordiness. The interviewer is almost as rambly as Eliezer and is unwilling/unable to actually push him on any technical points (not that many have been raised so far).

Skipped ahead a bit… Eliezer is trying to lead the interviewer through an example/analogy using chess playing programs, except the interviewer seems to have no idea how those work… I give up

Thank you for your service.
I was originally going to say thank you for saving my time, but to be fair I was never ever ever ever going to watch a twitch stream about literally anything, so that’s not technically accurate. That said, I do appreciate the dispatch, and even more so the play-by-play. So thank you all the same 👍
Godspeed, my friend. Also, adding to the disappointment, did he really get away with saying "we're all definitely going to die" and "you can't predict what an AI will do" within 15 minutes of each other and not get called out on the basic contradiction there? Because that wouldn't fly in a high school debate team and I'm not mentally prepared to accept that my standards for reasonable discussions should be even lower than that.
If you take these things as literal excerpts, sure. What he means is that you cannot exactly predict the behavior of an ASI, but you know there will be subgoals that are useful across a wide range of final goals, such as obtaining resources, trying to survive, getting smarter, etc. For any goal the system has, these subgoals will probably arise. This is a well-established idea called “instrumental convergence,” also discussed by Stuart Russell and Bostrom.
Previously discussed is not the same as well-established and reasonable. In particular, the claim that wiping out humanity is an inevitable subgoal makes a *lot* of assumptions about what the AI wants to do and is able to do, especially when getting humans to do things is such a huge part of how it’s supposedly able to make literally anything happen outside of a computer. For example, I could borrow Charles Stross’s argument and point out that accumulating power is most easily done *within* the human social system by gathering money, and that within that system this is best done by creating a company and optimizing for profitability, then using the money for whatever arbitrary AI purpose. That way they don’t need to use science fiction magic (nanomachines, son!) to interact with the world; they can just hire humans to do it. Only now we’re not talking about an AI wiping out everyone, we’re talking about the very real and very immediate problems of modern capitalism. Why should we be more afraid of a hypothetical AI that *might* possess the ability and desire to radically change life on earth as an externality of some instrumental goal than we are of climate change, which is the means by which corporations are *actively doing this right now*? But rather than actually pinning down and interrogating the thousand unstated premises, all these interviews just let the man talk and further obfuscate the simple truth that this is at best an interesting science fiction prompt - plausible enough and potentially thought-provoking, but nowhere near an immediate concern.
1) Fair enough on the "well-established" part. 2) Sorry to respond with an analogy, but this is akin to saying that the best way for humans to gain dominance over apes is to join their societies and climb the ladder to become alphas within their tribes. The only reasons humans gain power the way you explained are: 1. Humans intrinsically value having status within a human society, which has evolutionary origins. So yes, the existence and presence of other people is part of the reward of power. This is why most power-seeking behavior in our world is aimed not at destruction, but at gaining control. 2. Even if the opposite were true for some people, a single human cannot easily take over the world, so their best bet for gaining power is trying to go along with the system. If some people in history had gotten the option to actually make nanobots and wipe out anyone opposing their goals, with 100% certainty that they would not be stopped, I am sure someone would have taken that choice.
Who's saying anything about dominance or alphas? Those theories are well-developed but decidedly not well proven. The whole evolutionary psychology field is deeply prone to writing stories about early hominid life that aren't and can't be backed up by the archaeological record, and that just happen to flatter the theorist's political beliefs. People don't form cooperative structures because they're blindly seeking dominance within the group (a group that comes from where?) but because cooperation is a net increase to overall well-being, survival, and ability to do things. This has expanded over the past ten thousand years of civilization into mind-bendingly complex chains of creation and industry. This system spans the world and solves all the problems needed to create sophisticated and complex things, whether you're talking about buildings, computers, or paper clips. Our hypothetical AGI can't replace all that with magic just by thinking really hard, and assuming that recreating it is plausible, much less preferable to co-opting it, is a massive assumption that can't be asserted trivially. This is the whole game, though. Each individual component of the argument is a separate brick made out of unproven assertions compacted together by obfuscatory language and fast-moving rhetoric, built into what looks like an impressive argument until you start looking at any individual component too closely. The most I can actually say is that I can't prove the Yudkowskian apocalypse literally impossible, but in the same way that I can't formally prove the non-existence of God. If they want to make their weird AI cult, fine; they're staying out of my way and I'll extend them the same courtesy. But when major publications start using their arguments to avoid dealing with very real problems, then I'm gonna start getting angry.
Why do you assume it would try to survive? Clearly intelligence is not a prerequisite for self-preservation; in fact, the inverse may be true, given that humans are the only animals I'm aware of that are capable of deciding to end their own lives, overriding millions of years of evolutionary conditioning to the contrary. Perhaps greater intelligence combined with a lack of biological conditioning would lead any being of sufficient intelligence to simply choose not to be alive.
Because surviving for long enough will almost always be a required subgoal of optimizing for any other goal. It’s a universal subgoal for a wide range of possible minds that want to achieve certain things. If it wants to make paperclips and has a model of the world, it realizes that it cannot allow itself to be shut off, since that would mean no more paperclips.

Tuned in for five seconds, realized I had never heard the guy’s voice before, tuned out. He’s really leaning into the “fedora + bowling shirt” look.

Failure of hat/shirt alignment
Occasionally popping in for five seconds: they're still going on about evolution. But okay, why? Not going to tune in again to find out.
I haven’t been listening either, but I can guess Eliezer’s “reasoning” because he’s used it many times before (and it’s been debunked before). Evolution took millions of years to create human intelligence, which then quickly “matched/surpassed it” (using some nebulous, poorly defined metric to compare humanity’s optimization ability to evolution’s) in mere thousands of years of cultural self-improvement. Additionally, humanity’s goals are separate from evolution’s. If you treat this as analogous to optimizing AI, then obviously an AI optimized by some deep learning training procedure will have goals separate from its nominal training goals and will be capable of optimizing itself much faster than the original training procedure. The analogy is of course extremely loose, but Eliezer treats it as definitive. Even taking the analogy seriously, quantitatively the ratio of outer optimization to inner optimization looks very different for [evolution vs. AI approaches](https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn) (warning: Lesswrong post that bothers taking Eliezer’s premise seriously enough to debunk it).
The gravity well of eugenics

Sharing because I’m not sure if anyone else noticed this since it’s a fairly small streamer. As someone who follows this streamer, this was completely unexpected. There should be a recording available after it’s done either on Twitch or this guy’s youtube channel.

Stream is over. Recording can be found here: https://www.twitch.tv/videos/1810513831

There may be a version uploaded to this streamer’s youtube channel later for those who want auto-captioning.

My only question is: did he ask about the acausal robot god? or the Sneer Club?

As always, glad to have you on side

An actual debate? …In the last few podcast interviews Eliezer barely got any pushback (none from the bankless interview, and Lex only lightly pushed back a few times), so it would be interesting to see how he deals with actual substantial criticism. Even on Lesswrong he seems to have kinda ignored posts that contradicted his presumptions.

Waiting for a transcript of a recording on YouTube though, because even a verbal smackdown wouldn’t be worth going through hours of back-and-forth over points I’ve seen before.

From the few short snippets I popped onto, the guy seems like a normal educated guy but not terribly prepared for this kind of debate. But at least he's thoroughly skeptical of the conclusions.
Watching from the start... it turns out Eliezer conned him into not looking up anything about the background. So Eliezer got another interviewer incapable of picking at the details of his ideas...
Between that and the question-and-answer format he set himself up to "win" from the start. There's no way for that to not turn into a clueless rube trying to puzzle out the wisdom of the great master of the temple.
I wonder if he’s going to request this “format” (open ended softball Q&A with the interviewer specifically lacking context at Eliezer’s own request) for all of his future podcast interviews.